\secondreader{Jos\'e Meseguer\\(Center for Study of Language and Information)}
\figurespagefalse
\tablespagefalse
\submitdate{November 1991}
\beforepreface
\begin{abstract}
Among the various formal models proposed to provide semantics for concurrency constructs in programming languages, partial orders have the advantages of conceptual simplicity, mathematical tractability, and economy of expression.
We first observe that the theory of enriched categories supplies a natural abstraction of the notion of partial order, the \%D-schedule.
Varying the choice of temporal domain \%D allows for other forms of temporal constraint beyond that available from simple ordering.
For example, having the constraints on inter-event delays be numeric bounds produces a generalized metric-space structure suitable for the discussion of real-time computation.
We then construct an algebra of {\em processes} parametrized by the notion of time.
Here a process is a structure based on schedules that also incorporates nondeterminism.
Since the model is category-based, we can define operations on \%D-schedules and processes via universal constructions that depend little on the choice of \%D.
Also, given a suitable choice of process structure and process morphism, the constructions used for process operations and schedule operations are remarkably similar.
\end{abstract}
\prefacesection{Acknowledgements}
I would first of all like to thank my advisor, Vaughan Pratt, whose ideas, inspirations and criticisms have contributed substantially to this work. Ross Casley and Jos\'e Meseguer, as collaborators, have also had a significant influence. Committee members John Mitchell and Robert Floyd provided useful comments.
Conversations with various other people, including John Power, Mike Johnson, Phil Scott, Bard Bloom and Albert Meyer have also been quite helpful.
This research was supported in part by a National Science Foundation Graduate Fellowship.
And yes, I used Paul Taylor's commutative diagram macros. For that matter, I also used Don Knuth's \TeX, Leslie Lamport's \LaTeX, Supoj Sutanthavibul's {\tt fig} package, Richard Stallman's GNU Emacs and numerous other pieces of software without which this work would have taken much longer to produce. Nor would it be right to ignore the excellent facilities and support provided by Stanford's Computer Science Department and CSD-CF.
Lastly, I am profoundly grateful to my parents (not to mention the rest of my family for that matter) for the encouragement and support they have provided over the years.
\afterpreface
\prefacesection{Outline}
In Chapter \ref{chapter:tstruct}, we present the concept of schedule, a collection of events with temporal constraints, which is rooted in the various extant partial order models of concurrency \cite{Gre,Gra,NPW,MS,Pr82,PW}, particularly that of the labeled partial order or {\em pomset} model of Grabowski \cite{Gra} and Pratt \cite{Pr82}.
Partial orders themselves are a sort of primaeval form of schedule having no further temporal information beyond some order constraints between events.
We consider the similarities of the axioms of partial orders and metric spaces,
the goal being to define abstract schedule operations that are applicable to all such structures.
This is done by having an appropriate notion of schedule map or {\em morphism} and then using various universal constructions from category theory.
Much of the material presented in this chapter is actually joint work with Casley, Meseguer and Pratt, and as such is mainly developed elsewhere (see \cite{CCMP,Cas91}).
It does, however, provide necessary groundwork for the process model presented in the following chapter.
Chapter \ref{chapter:proc} is the main focus of the thesis and is concerned with introducing choice or nondeterminism into the model.
The initial vehicle for this is the disjunctive-normal-form (set of sets) representation of process that is inherent in the original partial order model and indeed in the trace theoretic model from which it derives.
Once the appropriate additional structure is determined and once we have an appropriate definition for {\em process} morphism, many of the categorical definitions for operations go through as for schedules.
As an example, Chapter \ref{chapter:dflow} considers the application of this model to dataflow nets, recasting a result of Gaifman and Pratt \cite{GP} from partial orders to the more general setting of physical \%D-schedules.
The most basic notion of {\em schedule} is that of set of events
together with some further collection of constraints
on these events that determine when they are allowed to occur.
At first glance, it may seem sufficient to refer to one's intuitive notion of an event,
remarking that we leave our events uninterpreted;
i.~e., the question of whether a particular event
is a disk-read or a light bulb turning on is of no concern to us.
However, it should be clarified that in our scheme, events are atomic or {\em instantaneous} entities.
This becomes important when we generalize the partial order notion of time, with its vague notions of
`before' and `after', to a metric notion with actual numerical delays.
For example, if we have events corresponding to a light bulb turning on and a door closing,
what we actually need are particular moments,
e.~g., the instant in which the filament achieves two-thirds of its eventual brightness and the instant the
door first contacts the doorjamb, in order to make meaningful a phrase like
``the door closes 5 seconds after the light goes on.''
It may seem unduly restrictive to insist that events be atomic when, in reality,
almost none of the particular actions one wishes to model are actually atomic or instantaneous.
However, in our usage, ``event'' means the referential moment rather than the full action.
In our example, the closing of a door, which undoubtedly takes a few seconds to complete,
is not an event {\em per se}.
What usually happens, however, is that one is not concerned with the duration of the door-closing action,
and so one merely chooses, or takes as implicit, an arbitrary moment during the swing of the door and
identifies that event with the entire process of the door being closed.
This interpretation makes it possible to consider ``refinement of events'' in a realm where events are atomic,
since now one is not refining an event so much as
refining the particular tasks or actions of which that event is a part.
The result of a refinement is {\em not} a subdivided event,
but rather the inclusion of several additional referential moments of a given task or action into our schedule.
The events themselves remain instantaneous; we just see more of them.
We next consider the question of the form the constraints should take.
Perhaps the coarsest notion of constraint on a set of events is that of
an ordering: a set of events $E$ with a partial order
relation $\le$ constraining the temporal order in which these events
may occur. In this case $\le$ satisfies the usual axioms.
$$\eqalignno{&x\le x&(R)\cr
x\le y \land y\le z\implies &x\le z&(T)\cr
x\le y \land y\le x\implies &x = y&(A)\cr}$$
Recall the typical interpretation for $x\le y$,
namely that of being an abbreviation for ($x<y$ or $x=y$),
that is, ``Either $x$ occurs strictly before $y$ or $x$ and $y$ are the {\em same event}.''
Curiously enough, this is not merely a statement about the relative timings of $x$ and $y$,
but the latter half makes an additional assertion about the identities of the events themselves.
In particular, the antisymmetry axiom makes it impossible to constrain two distinct events to be simultaneous, since to do so in a partial order setting means imposing on a pair of events $(x,y)$ both of the constraints $x\le y$ and $y\le x$, which leads inescapably to their not being distinct.
Aside from this question of why one should not be allowed to constrain distinct events to be simultaneous,
it will be seen that the omission of the antisymmetry axiom allows for a simpler framework.
Thus, for the most basic notion of schedule,
we prefer instead to use {\em preorders} (sometimes known as {\em quasi-orders})
which only satisfy $(R)$ and $(T)$.
As an alternative notion of temporal constraint,
consider the standard axioms for a metric space $(E,d:E\times E\to\_R^{+})$.
$$\eqalignno{d(x,x)&=0&(R)\cr
d(x,y) + d(y,z) &\ge d(x,z)&(T)\cr
d(x,y)=0 &\implies x = y&(A)\cr
d(x,y)&=d(y,x)&(S)\cr}$$
Here, it suffices to note the similarity of form between
these axioms and those for partial orders.
The metric space is our paradigm for a more detailed notion of schedule,
one specifying actual numerical delays between events rather
than simply indicating the temporal precedences.
As with partial orders, there are objections that may be raised to various axioms,
e.~g., the axiom $(A)$ inspires the same objection as the correspondingly labeled axiom for partial orders,
namely that it forbids distinct simultaneous events.
As with partial orders, we shall see that discarding the offending axioms, in this case $(A)$ and $(S)$, leads to a useful structure in its own right.
However, in this case what we are left with is actually quite different from the usual notion of metric space.
\section{The Temporal Domain}
We now consider the particular common features of preorders and metric spaces
that should be kept when abstracting to a general notion of schedule.
There is a set of events E,
and a constraint map $\^d:E\times E\to\%D$ giving the temporal
constraint the schedule assigns to each pair of events.
Here \%D is an abstract domain of temporal constraints,
with the following attributes:
\begin{itemize}
\item
some constraints imply others,
so we at least have a partial ordering $\sqsubseteq$ on the constraints of \%D
\item
a particular constraint (call it the {\em zero} or {\em instantaneous} delay)
$I$ that bounds the delay between an event and itself.
\item
a sequential composition of constraints $\otimes$, i.~e., given
constraints $d$, $e$ on two consecutive delays, we should be able to
infer a constraint $d\otimes e$ on the combined delay.
\end{itemize}
Naturally, $\otimes$ should be monotone,
i.~e., if either $d$ or $e$ are strengthened,
$d\otimes e$ should be a correspondingly stronger constraint.
Also, for any 3 consecutive constraints,
it should not matter which two are composed first.
Finally, if we compose a constraint with the instantaneous constraint $I$,
we should not obtain any new information.
\begin{definition}
An {\em ordered (commutative) monoid} $(\%D,\sqsubseteq,\otimes,I)$ is a set \%D with a partial order $\sqsubseteq$, a binary operation $\otimes:\%D\times\%D\to\%D$, and a distinguished element $I\in\%D$ such that $(\%D,\otimes,I)$ is a (commutative) monoid, i.~e., for all $d,e,f$ in \%D
\begin{eqnarray}
d \otimes (e \otimes f) &=& (d \otimes e) \otimes f\\
d \otimes I ~=&d&=~ I\otimes d\\
(d\otimes e &=& e\otimes d)
\end{eqnarray}
\noindent and such that $\otimes$ is monotonic with respect to $\sqsubseteq$, i.~e.,
\begin{eqnarray}
d\mathbin{\sqsubseteq}e \mathrel{\land} f \mathbin{\sqsubseteq} g &\implies& (d \otimes f) \mathbin{\sqsubseteq} (e \otimes g)\label{om:monot}
\end{eqnarray}
\end{definition}
\noindent
There are also logical operations.
\begin{itemize}
\item
A given set of constraints $\{d_i\}_i$ applying to some inter-event delay
can be equivalently stated as a join (conjunction) $\bigvee_i d_i$,
\item
There is a null (weakest) constraint, $0$.
\end{itemize}
To formalize the above, we recall the following definition:
\begin{definition}
For any partial order $(\%D,\sqsubseteq)$ and any subset $S$ of \%D, a {\em join} of $S$ (denoted $\bigvee S$ or $\bigvee_{s\in S}s$)
is any element $\bar s\in\%D$ for which
\begin{itemize}
\item
$s\sqsubseteq\bar s$ for all $s\in S$, and
\item
if $t\in\%D$ is such that $s\sqsubseteq t$ for all $s\in S$, then $\bar s\sqsubseteq t$.
\end{itemize}
\end{definition}
Antisymmetry of $\sqsubseteq$ guarantees that joins are unique.
\begin{definition}
A {\em (join) semilattice} $(\%D,\sqsubseteq,0)$ is
a partial order $(\%D,\sqsubseteq)$ together with a least element $0$
having the property that every finite subset of \%D has a join,
$0$ being the join of the empty set.
A semilattice is {\em complete} if {\em every} subset has a join.
\end{definition}
As it happens, a semilattice can just as easily be specified using only the join rather than the ordering $\sqsubseteq$:
\addtocounter{theorem}{-1}
\begin{definition}{\bf [alternate]}
A {\em complete semilattice} $(\%D,\bigvee,0)$ is a set \%D, together with a distinguished element $0$ and a function $\bigvee:\%P(\%D)\to\%D$ (here $\%P(\%D)$ is the powerset of \%D) such that, for all $d\in\%D$ and all collections $(S_i)_i$ of subsets of \%D,
\begin{eqnarray}
\bigvee\{d\} &=& d\\
\bigvee\emptyset &=& 0\\
\bigvee_i\Bigl(\bigvee S_i\Bigr) &=& \bigvee\Bigl(\bigcup_i S_i\Bigr)\label{sl:comp}
\end{eqnarray}
\end{definition}
These two definitions of complete semilattice are equivalent \cite{Joh82}; given a semilattice of the second form, the ordering $\sqsubseteq$ can be derived from the join operation
$$d\sqsubseteq e\iff\bigvee\{d,e\}=e$$
The corresponding alternate definition for ordinary semilattices uses $\%F(\%D)$, the set of {\em finite} subsets of \%D, and only requires (\ref{sl:comp}) to hold for finite collections of finite subsets of \%D.
In any case, we see that the desired structure for our temporal domain is something that is a combination of a complete join semilattice and an ordered monoid.
\begin{definition}
A {\em (commutative) closed semiring} is any tuple $(\%D,\bigvee,\otimes,0,I)$ such that $(\%D,\bigvee,0)$ is a complete semilattice, $(\%D,\otimes,I)$ is a (commutative) monoid and such that the following distributive properties hold
\begin{eqnarray}
\bigvee_i(d \otimes e_i) &=& d \otimes \bigvee_i e_i\label{csr:dist}
\end{eqnarray}
\end{definition}
Note that this distributivity implies the monotonicity (\ref{om:monot}) stipulation for ordered monoids.
We have now arrived at the desired notion of temporal domain:
\begin{definition}
A {\em temporal domain} \%D is a commutative closed semiring
$(\%D,\bigvee,\otimes,0,I)$.
\end{definition}
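To make the definition concrete, the following sketch (illustrative only; the representation and all names are ours, not part of the formal development) packages two temporal domains in Python: the two-element Boolean domain underlying preorder schedules, and a min-plus domain of delay bounds underlying metric-style schedules. In the latter, smaller bounds are stronger constraints, so the join is `min` and the weakest constraint is $\infty$.

```python
import math

# Illustrative only: two temporal domains packaged as dictionaries holding the
# closed-semiring data (join over an iterable, weakest constraint, sequential
# composition "otimes", instantaneous constraint "unit").

# Boolean domain (preorder schedules): True = "must precede", False = no constraint.
bool_dom = dict(
    join=lambda s: any(s),          # imposing several constraints at once
    zero=False,                     # the vacuous constraint
    otimes=lambda d, e: d and e,    # both links must constrain the composite delay
    unit=True,                      # an event trivially precedes itself
)

# Min-plus domain (metric-style schedules): a constraint is an upper bound on a
# delay; smaller bounds are stronger, so join is min and the weakest bound is inf.
minplus_dom = dict(
    join=lambda s: min(s, default=math.inf),
    zero=math.inf,
    otimes=lambda d, e: d + e,      # consecutive delays add
    unit=0.0,                       # zero delay from an event to itself
)

def check_domain(dom, samples):
    """Spot-check the monoid laws and distributivity of otimes over join."""
    j, ox, i = dom['join'], dom['otimes'], dom['unit']
    for d in samples:
        assert ox(d, i) == d == ox(i, d)                            # identity
        for e in samples:
            assert ox(d, e) == ox(e, d)                             # commutativity
            for f in samples:
                assert ox(d, ox(e, f)) == ox(ox(d, e), f)           # associativity
                assert ox(d, j([e, f])) == j([ox(d, e), ox(d, f)])  # distributivity
    return True
```

Running `check_domain` over a few sample constraints spot-checks the commutative-closed-semiring laws for both domains.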
Though there is no reason to suppose {\it a priori} that the sequential composition $\otimes$ need be a commutative operation, restricting ourselves to this case simplifies the discussion.
\subsection{Monoidal Categories}
Closed semirings are a special case of monoidal categories that are closed and cocomplete. Recall that a (small) category \%D consists of
\begin{itemize}
\item
a set of {\em objects}, $\ob(\%D)$
\item
for every pair of objects, $d,e\in\ob(\%D)$, a set ({\em homset}) of {\em morphisms} $\:\Hom(d,e)$\goodbreak
(a typical morphism therein being denoted $m:d\to e$),
\item
for every object $d\in\ob(\%D)$ a distinguished {\em identity} morphism $1_d:d\to d$, and
\item
for every triple of objects $c,d,e\in\ob(\%D)$,
a {\em composition} operation $\circ:\:\Hom(d,e)\times\:\Hom(c,d)\to\:\Hom(c,e)$
\end{itemize}
with the usual associativity of $\circ$ and identity laws:
\begin{eqnarray*}
(l\circ m)\circ n&=&l\circ (m\circ n)\\
m\circ 1_c&=&1_d\circ m = m
\end{eqnarray*}
for all morphisms $l:d\to e$, $m:c\to d$, and $n:b\to c$.
We then note that a preorder is equivalent to a small category in which we constrain all of the homsets to contain at most one morphism. Essentially, each instance of a $\sqsubseteq$ relationship between two constraints $d,e$ becomes a morphism $d\to e$; if $d\sqsubseteq e$ does not hold, then we take $\:\Hom(d,e)$ to be empty. Each identity morphism is an instance of the reflexivity axiom, while the composition operation provides transitivity.
In the general categorical setting, we now have the possibility of having more than one morphism between constraints.
The morphisms in some sense are {\em named implications}: whereas before we were satisfied with the knowledge $c\sqsubseteq d$, i.~e., that $d$ is a stronger constraint than $c$, we now have a morphism $m:c\to d$ designating {\em how} we know that $d$ is a stronger constraint than $c$.
The identity and associativity laws provide proof rules
$$\matrix{\offinterlineskip\strut\cr
\noalign{\hrule}
\strut 1_d:d\to d\cr}\qquad\qquad
\matrix{\offinterlineskip\strut
m:c\to d\qquad n:b\to c\cr
\noalign{\hrule}
\strut m\circ n:b\to d\cr}$$
whereby we can derive additional constraint implications.
Taking the join of a set of constraints corresponds in the categorical setting to taking a {\em colimit} of a diagram of constraints;
insisting that all joins be present is a matter of asserting {\em cocompleteness} for the corresponding category.
For a closed semiring, which has the additional operation $\otimes$,
the analogous notion is that of a {\em monoidal biclosed} category or a {\em symmetric monoidal closed} category in the case of a commutative closed semiring.
Here, we give only a brief, informal overview (see Mac~Lane \cite{Mac} for the full definition).
A {\em monoidal} category \%D is an ordinary category together with
\begin{itemize}
\item
a functor $\otimes:\%D\times\%D\to\%D$,
\item
a distinguished object $I\in\ob(\%D)$, and
\item
natural isomorphisms $\alpha:(b\otimes c)\otimes d\to b\otimes(c\otimes d)$, $\lambda:I\otimes d\to d$ and $\rho:d\otimes I\to d$, required to make certain diagrams (the pentagon and triangle diagrams) commute.
\end{itemize}
These last conditions, known as {\em coherence} conditions, provide the actual ``monoid\-al'' structure.
It would be far too restrictive to actually require $(b\otimes c)\otimes d=b\otimes (c\otimes d)$, even though this happens to hold in the preorder case;
normally, the best we can do is stipulate that they be canonically isomorphic.
The coherence conditions are sufficient to guarantee that, for every pair
of functors $\%D^n\to\%D$ built up from $\otimes$ and $I$ that are equivalent in a monoidal sense,
i.~e., one is obtainable from the other via applications of the associativity and identity rules for monoids,
there is a unique natural isomorphism ---
that is, every means of arriving at such an isomorphism
via combinations of $\alpha$, $\lambda$, and $\rho$ yields the same morphism.
For a proof of this and a full discussion of coherence,
see Chapter 7 of Mac~Lane \cite{Mac}.
A {\em symmetric} monoidal category is one in which we have an additional natural isomorphism
\begin{displaymath}
\^g:c\otimes d\to d\otimes c
\end{displaymath}
and an additional set of coherence conditions.
In the case of a preorder category, these are again vacuously satisfied and the whole structure reduces to our previous notion of commutative ordered monoid.
A symmetric monoidal category is {\em closed} if, for all $c\in\ob(\%D)$,
the functor $c\otimes{-}$ (and hence, the functor ${-}\otimes c$ as well)
has a right adjoint.
If a functor possesses a right adjoint it preserves colimits.
Furthermore in the case of preorder categories, the converse holds as well.
Recall that in the preorder case, colimits are merely joins;
thus, the statement that $c\otimes{-}$ preserves joins is merely one of the distributive laws we saw earlier (\ref{csr:dist}).
It is then straightforward to see that closed semirings are the preorder case of symmetric monoidal categories.
\section{Schedules}
Given a temporal domain \%D,
describing the corresponding notion of schedule is straightforward.
\begin{definition}
A \%D-schedule, $\:p=(E,\^d)$, consists of a set $E$ and a function
$\^d:E\times E\to\%D$ assigning a temporal constraint to each pair of events.
\end{definition}
Considering the second use of schedule map described above, one may
then describe a notion of system composition as follows.
We start with a collection of components each of which may be described by a
schedule.
These components may contain other (sub)components,
so we also provide maps giving the subcomponent/component relationships.
Specifically, we have a directed {\em graph} $G=(V,W)$, each of whose
vertices $v\in V$ is associated with some component and thence to the
schedule $\:p_v$ for that component;
each edge $w\in W$ has a source $w_0\in V$ and a target $w_1\in V$ and
a corresponding schedule map $f_w:\:p_{w_0}\to\:p_{w_1}$.
Call this collection $(V,W,(\:p_v)_{v\in V},(f_w)_{w\in W})$
a {\em diagram} of \%D-schedules.
The question then arises of what single schedule $\:p$ best describes this
entire system of components.
The following two properties should hold of this system schedule.
\begin{enumerate}
\item
All of the component schedules map into the system schedule $\:p$
in a consistent way.
Formally this means we have schedule maps
$g_v:\:p_v\to\:p$ for all $v\in V$,
and we require that for any edge $w\in W$ and any event $e\in E_{w_0}$
that $g_{w_1}(f_w(e))=g_{w_0}(e)$
(i.~e., no matter which component we view a particular event from, the
$g$ maps take it to the same event of $\:p$).
\end{enumerate}
\begin{definition}
The collection $(\:p, (g_v)_{v\in V})$ is said to be {\em consistent}
with the given diagram if it satisfies this property
($g_{w_1}\of f_w=g_{w_0}$ for all $w\in W$).
\end{definition}
\begin{enumerate}
\setcounter{enumi}{1}
\item
The schedule \:p is the least constrained such schedule.
That is, we now consider all schedules consistent with the given diagram;
given any alternative consistent schedule $\:p'$ with inclusions
$(g'_v:\:p_v\to\:p')_{v\in V}$, there should be a map $h:\:p\to\:p'$
from the least constrained schedule $\:p$ into $\:p'$, and the inclusions should
agree, i.~e., $h(g_v(e))=g'_v(e)$ for all components $v\in V$ and all
events $e\in E_v$.
\end{enumerate}
One further desired requirement is that for any given
$\:p'$ with inclusions $(g'_v:\:p_v\to\:p')_v$, $h$ be unique.
Note first of all that once we include this in our list of desired
properties for $\:p$, we have immediately that $\:p$ is unique up to
isomorphism: if $\:p'$ also satisfies these properties, then we
must have a unique map $h':\:p'\to\:p$, as well as unique maps $\:p\to\:p$ and
$\:p'\to\:p'$, which by uniqueness must be identities; hence $hh'$ and $h'h$
are identities.
Secondly, we could also view the $g'_v:\:p_v\to\:p'$ as representing a
family of observations of all of the components $\:p_v$, in which case
this last uniqueness requirement gives us a bijective correspondence
between such families of observations and single observations
$h:\:p\to\:p'$ of $\:p$.
Combining, we get
\begin{definition}
Given a diagram of \%D-schedules, $(V,W,(\:p_v)_{v\in V},(f_w)_{w\in W})$,
its {\em composition} is a consistent $\:p, (g_v:\:p_v\to\:p)_{v\in V}$
such that for all other consistent $\:p', (g'_v:\:p_v\to\:p')_{v\in V}$
there is a unique $h:\:p\to\:p'$ such that $h\of g_v=g'_v$ for all
$v\in V$.
\end{definition}
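In programming terms, the underlying event set of such a composition is a quotient of the disjoint union of the component event sets, identifying $e$ with $f_w(e)$ along every edge; a union-find structure computes this quotient directly. The following is an illustrative sketch with invented names, and it shows only the event-set part (the constraints of the composition, the pointwise joins of component constraints closed under the schedule axioms, are omitted).

```python
def compose_events(event_sets, edges):
    """Quotient of the disjoint union of component event sets: identify
    e in E_{w0} with f_w(e) in E_{w1} along every edge, via union-find.
    event_sets: dict vertex -> iterable of event names;
    edges: list of (v0, v1, f) with f a dict mapping events of v0 to v1."""
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    for v, evs in event_sets.items():       # disjoint union, tagged by vertex
        for e in evs:
            parent[(v, e)] = (v, e)
    for v0, v1, f in edges:                 # identify e with f_w(e)
        for e, fe in f.items():
            union((v0, e), (v1, fe))
    return {x: find(x) for x in parent}     # each event's class representative
```

The maps $g_v$ of the composition then send each component event to its class representative.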
In the terminology of category theory, a consistent $\:p', (g'_v:\:p_v\to\:p')_{v\in V}$ is actually a {\em cocone} (dual to a {\em cone} in which the morphisms point {\em away} from the apex object \:p),
and the single $\:p, (g_v:\:p_v\to\:p)_{v\in V}$ through which all others map
is nothing more or less than a {\em colimit} of the given diagram in the category of \%D-schedules.
A particular example for the case of preorder schedules iterates the
transitive-closure construction, joining into $\^d(e,e')$ the composites
$\^d(e,e'')\otimes\^d(e'',e')$ (for some $e''\in E$), to eventually obtain the desired matrix
$\^d=\^d^{(\omega)}$ which does satisfy the axioms $(R)$ and $(T)$.
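This closure can be sketched generically over any temporal domain: repeatedly join into $\^d(e,e')$ the constraints obtained by composing through an intermediate event, until a fixed point $\^d^{(\omega)}$ is reached. The Python below is our illustration, parameterized by the join, $\otimes$ and $I$ of the domain; the instances exercise the Boolean and min-plus readings.

```python
import math

def closure(delta, events, join2, otimes, unit):
    """Iterate delta(x,y) := delta(x,y) join (delta(x,z) otimes delta(z,y))
    until a fixed point delta^(omega) satisfying (R) and (T) is reached."""
    d = {(x, y): delta[x, y] for x in events for y in events}
    for x in events:                         # (R): impose I on every (x, x)
        d[x, x] = join2(d[x, x], unit)
    changed = True
    while changed:                           # (T): close under composition
        changed = False
        for x in events:
            for z in events:
                for y in events:
                    c = join2(d[x, y], otimes(d[x, z], d[z, y]))
                    if c != d[x, y]:
                        d[x, y] = c
                        changed = True
    return d

# Preorder (Boolean) instance: a <= b and b <= c force a <= c.
evs = ['a', 'b', 'c']
delta = {(x, y): False for x in evs for y in evs}
delta['a', 'b'] = delta['b', 'c'] = True
po = closure(delta, evs, join2=lambda u, v: u or v,
             otimes=lambda u, v: u and v, unit=True)

# Min-plus instance: bounds compose by addition, join is min.
md = {(x, y): math.inf for x in evs for y in evs}
md['a', 'b'], md['b', 'c'], md['a', 'c'] = 2.0, 3.0, 10.0
mp = closure(md, evs, join2=min, otimes=lambda u, v: u + v, unit=0.0)
```

In the min-plus instance the closure tightens the direct bound $10$ to the composite bound $2+3=5$, the familiar shortest-path relaxation.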
For physical \%D-schedules, the compositions need not exist in general.
However, we still have the following helpful result:
\begin{theorem}\label{thm:phys}
If the composition of a given diagram of physical \%D-schedules exists, then this composition is isomorphic to the composition of the same diagram as (general) \%D-schedules.
\end{theorem}
\noindent This follows from
\begin{lemma}\label{lem:phys}
If $h:\:p\to\:p'$ is a general \%D-schedule map,
and \:p' is a physical \%D-schedule, then so is \:p.
\end{lemma}
\begin{proofl}{Theorem \ref{thm:phys}}
Given a diagram, we know that the general \%D-schedule composition
$(\:p, g_v:\:p_v\to\:p)$ exists and that if the physical \%D-schedule
composition $(\:p',g_v':\:p_v\to\:p')$ exists,
then we have a (general \%D-schedule) map $h:\:p\to\:p'$ for which $h\of g=g'$.
Lemma \ref{lem:phys} then implies that $\:p$ must be a physical \%D-schedule,
and thence that the maps $g_v$ and $h$ are all physical \%D-schedule maps,
from which it is fairly easy to conclude as before that $\:p$ and $\:p'$ must be isomorphic.
\end{proofl}
So, to find the composition of a diagram of {\em physical\/}
\%D-schedules, one can compute their composition as general
\%D-schedules, and, if that turns out to satisfy the $\^d(x,x)=I$
requirement, then this composition will be the physical \%D-schedule
composition as well.
An example of where this kind of theorem does not hold is the case of
the composition of a diagram of partial order schedules with monotone maps,
calculated in the realm of partial order schedules, i.~e., the category $\_\Pos$ of partially ordered sets and monotone functions,
which we take to be a subcategory of $\_2\hCat$,
the realm of preorder schedules.
Calculating the schedule composition directly in $\_2\hCat$
can yield a schedule that is quite different from what we get by calculating it in $\_\Pos$.
Consider the case where we have a diagram of three schedules,
$A\leftarrow C\to B$, each having the same underlying set of two events $\{e_1,e_2\}$ with the underlying set maps being the identity on this set, but with the events unordered in $C$, ordered $e_1\le e_2$ in $A$ and $e_2\le e_1$ in $B$.
Their composition as preorders yields a two-event schedule $\{e_1,e_2\}$ where both orderings hold; however, this resulting schedule is {\em not} actually a partial order.
If the analogue to Theorem \ref{thm:phys} held,
this would imply that the partial order composition does not exist,
but actually, there is such a composition, namely a single event
schedule.
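The two calculations can be compared concretely. The following sketch (ours; the representation is invented) computes the $\_2\hCat$ composition of the example by uniting the two order relations and closing reflexively and transitively; the result satisfies both $e_1\le e_2$ and $e_2\le e_1$, so it is a preorder but not a partial order, whereas the $\_\Pos$ composition identifies the two events.

```python
# The diagram A <- C -> B on the shared event set {e1, e2}:
# e1 <= e2 in A, e2 <= e1 in B, unordered in C.
events = ['e1', 'e2']
le_A = {('e1', 'e2')}
le_B = {('e2', 'e1')}

# Preorder composition: union the constraints, close reflexively and transitively.
le = {(e, e) for e in events} | le_A | le_B
changed = True
while changed:
    changed = False
    for (x, y) in list(le):
        for (y2, z) in list(le):
            if y == y2 and (x, z) not in le:
                le.add((x, z))
                changed = True
# Both orderings now hold, so `le` is a preorder but not a partial order;
# collapsing e1 and e2 to a single event yields the Pos composition.
```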
Notice that if we instead consider partial orders not as the subclass of
preorders consisting of those that happen to satisfy the antisymmetry axiom, but rather as the subclass of {\em prosset} orders (i.~e., in \_3\hPhys) consisting of those satisfying the axiom
$$\^d(x,y)=I\implies x=y$$
or equivalently
$$x\le y\implies x=y\lor x\prec y$$
then we {\em do} have a corresponding result, namely that {\em if} the composition of a diagram of partial-orders exists {\em then} it is isomorphic to their composition as general \_3-schedules.
This is a matter of going through the explicit construction above and noting that if the composed schedule has $\^d(x,y)=I$ for some pair of distinct events $(x,y)$, then one of the schedules comprising the composition must also have such a pair of events.
The reason this does not contradict the previous paragraph is that
while we are considering the same class of schedules,
the schedule {\em maps} we allow are only those which are {\em strictly}
monotone ($x\prec y\implies f(x)\prec f(y)$).
That is, our schedule category is not $\_\Pos$ (partially ordered sets and monotone functions) but rather $\_\Pos_{<}$ (partially ordered sets and {\em strict} monotone functions).
The eventual impact of the choice to use only strict monotone maps
is that it prevents distinct ordered events from being synchronized,
i.~e., mapped to the same event in the codomain.
As noted above, the composition of physical \%D-schedules does not always exist, that is to say, {\%D\hPhys} is not cocomplete.
Nor, for that matter, is $\_\Pos_{<}$.
Nevertheless, both {\%D\hPhys} and $\_\Pos_{<}$
satisfy a somewhat weaker condition, and this
will turn out to be sufficient to our needs
when considering the question of {\em process} composition in the next chapter.
\begin{definition}
A category \%C is {\em almost cocomplete} if every diagram either
\begin{itemize}
\item
has a colimit, or
\item
has no co-cone at all, i.~e., for no object $\:q$ does there exist a collection of morphisms $(g_v:\:p_v\to\:q)_{v\in V}$ from the objects of the diagram $(\:p_v)_{v\in V}$ into $\:q$ such that the required commutativity properties hold.
\end{itemize}
\end{definition}
Restating this for schedule categories, we see that a schedule category is almost cocomplete precisely when every diagram of schedules either has a composition or is such that no schedule is consistent with it.
\begin{theorem}
If \%D is a closed monoidal category that is cocomplete,
then {\%D\hPhys} is almost cocomplete.
\end{theorem}
\begin{proof}
It suffices to show that if a co-cone $(\:p', (g'_v:\:p_v\to\:p')_{v\in V})$ over a given diagram exists, then a colimit exists as well.
We already know that if we consider the corresponding diagram in \%D\hCat,
there will be a colimit $(\:p, (g_v:\:p_v\to\:p)_{v\in V})$
and thus a unique (\%D\hCat) map $h:\:p\to\:p'$.
By Lemma \ref{lem:phys},
\:p, and hence this colimit, is present in {\%D\hPhys} as well.
\end{proof}
\section{Labeled Schedules}
Analogous to the notion of labeled partial order or {\em pomset} as described in \cite{Pr84},
we have the notion of a {\em labeled schedule} $\<\:p,\^m,\^S>$.
Here $\:p=(E,\^d)$ is a schedule in the previous sense,
and $\^m:E\to\^S$ is a labeling map.
$\^S\in\Set$ is referred to as the {\em alphabet}.
As with ordinary schedules, we can make a category out of these,
here using the comma category construction \cite{Mac}.
\begin{definition}
Given categories $\%A,\%B,\%C$ and functors $V:\%A\to\%C, W:\%B\to\%C$
the {\em comma category} $(V\comma W)$ is the category for which
\begin{itemize}
\item
objects are triples $\<a,\^g,b>$, where $a\in\ob(\%A)$, $b\in\ob(\%B)$ and $\^g:Va\to Wb$ in \%C.
\item
morphisms are pairs $\<\^a:a\to a',\^b:b\to b'>$ for which the following diagram commutes:
$$\begin{diagram}
Va&\rTo^{V\^a}&Va'\\
\dTo^{\^g}&&\dTo^{\^g'}\\
Wb&\rTo^{W\^b}&Wb'\\
\end{diagram}$$
\item
$1_{\<a,\^g,b>}=\<1_a,1_b>$, while $\<\^a':a'\to a'',\^b':b'\to b''>\circ\<\^a:a\to a',\^b:b\to b'>=\<\^a'\circ\^a:a\to a'',\^b'\circ\^b:b\to b''>$.
\end{itemize}
\end{definition}
Starting with a given category of ordinary schedules \%T,
the corresponding category of labeled schedules is then $(V\comma 1_{\_\Set})$,
where $V:\%T\to\_\Set$ is the forgetful functor taking an ordinary schedule
to its underlying event set and $1_{\_\Set}:\_\Set\to\_\Set$
is the identity functor.
If there is no ambiguity about the functors involved we denote this
$(\%T\comma\_\Set)$.
A morphism of labeled schedules $\<\:p,\^m,\^S>\to\<\:q,\^n,\^G>$
is then a pair of maps $\<f,\^f>$ where $f:\:p\to\:q$ is an (unlabeled) schedule map and
$$\begin{diagram}
V\:p&\rTo^{Vf}&V\:q\\
\dTo^{\^m}&&\dTo^{\^n}\\
\^S&\rTo^{\^f}&\^G\\
\end{diagram}$$
commutes.
Operations that are defined via universal constructions
(e.~g., concurrence or concatenation)
follow automatically from the definition of morphism.
\cite{CCMP} discusses these at length and in any case they are not
qualitatively different from the corresponding operations already
described for unlabeled schedules.
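The commuting square above has a direct computational reading: a pair $\<f,\^f>$ is a labeled-schedule morphism exactly when relabeling after mapping agrees with mapping after relabeling. A small sketch of this check, with invented data:

```python
def is_labeled_morphism(mu, nu, f, phi, events):
    """A pair (f, phi) is a labeled-schedule morphism when the labeling
    square commutes: nu(f(e)) == phi(mu(e)) for every event e."""
    return all(nu[f[e]] == phi[mu[e]] for e in events)

# Invented data: two events x, y mapped onto a single event u.
mu = {'x': 'a', 'y': 'b'}          # source labeling over {a, b}
nu = {'u': 'c'}                    # target labeling over {c}
f = {'x': 'u', 'y': 'u'}           # event map
phi = {'a': 'c', 'b': 'c'}         # alphabet translation
```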
Carrying out this construction using $\%T=\_\Pos$ yields the category $\_\Pom$ of labeled partial orders (pomsets).
Of interest, however, are those operations arising from label manipulations.
For the rest of this section, \%B refers to a category of labeled
schedules arising from the comma construction on some category \%T of
ordinary schedules, be it \_\Pos, {\%D\hCat} or {\%D\hPhys} for some temporal
domain \%D.
For any particular alphabet $\^S\in\Set$, we can find
within $\%B$ the category $\%B_{\^S}$ of behaviors over that alphabet.
$\%B_{\^S}$ is actually $(V\comma\^S)$, where $\^S:\_1\to\_\Set$ is the functor taking the single object of \_1 to the set $\^S$.
Now, for any alphabet translation $t:\^G\to\^S$
there are two canonical functors we can define:
\begin{itemize}
\item
The {\em renaming} $t_*:\%B_{\^G}\to\%B_{\^S}$,
which takes a behavior $q=\<\:q,\nu,\^G>$ and relabels all of its
events to produce $\<\:q,t\nu,\^S>$.
$$\commdiag
{V(\:q=t_*\:q)&&\cr
\dTo{}{\nu}&&\cr
\^G&\rTo{t}{}&\^S\cr}$$
\item The {\em restriction} $t^*:\%B_{\^S}\to\%B_{\^G}$,
which, in the case of $t$ being a monomorphism,
takes $p=\<\:p,\mu,\^S>$ to $\<\:p\cap\mu^{-1}t(\^G),t^{-1}\mu,\^G>$, that is, we take
$t^*p$ to be that subbehavior of $p$ whose events are those with labels
in $t(\^G)$. If, in fact, the underlying temporal structure
category $\%T$ {\em is} $\%D\hCat$, which is actually complete, we can
instead obtain the unlabeled schedule $t^*\:p$ by taking the pullback
$$\begin{diagram}
t^*\:p&\rTo{}{}&\:p\\
\dTo{}{}&&\dTo{}{\mu}\\
E\^G&\rTo{Et}{}&E\^S\\
\end{diagram}$$
Here, $E:\Set\to\%T$ is the right adjoint to {\%T}'s forgetful functor
(e.~g., the clique functor in the case of preorders). That this is an
equivalent construction is proven in \cite{CCMP}.
\end{itemize}
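On the event/label part of a behavior, the two functors admit a short sketch (ours; the schedule constraints are omitted, and all names are invented): $t_*$ post-composes the labeling with $t$, while for injective $t$, $t^*$ keeps only the events whose labels lie in $t(\^G)$ and pulls their labels back along $t$.

```python
def rename(behavior, t, sigma):
    """t_* : post-compose the labeling with t : Gamma -> Sigma."""
    events, nu, _gamma = behavior
    return (events, {e: t[nu[e]] for e in events}, sigma)

def restrict(behavior, t, gamma):
    """t^* (for injective t): keep events labeled in t(Gamma),
    pulling labels back along t."""
    events, mu, _sigma = behavior
    t_inv = {v: k for k, v in t.items()}    # injectivity makes this a map
    kept = [e for e in events if mu[e] in t_inv]
    return (kept, {e: t_inv[mu[e]] for e in kept}, gamma)

# Invented data: three events over alphabet {a, b}; Gamma = {x} with t(x) = a.
beh = ([1, 2, 3], {1: 'a', 2: 'b', 3: 'a'}, {'a', 'b'})
t = {'x': 'a'}
sub = restrict(beh, t, {'x'})               # keeps events 1 and 3
```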
The projection $\^s:\%B\to\Set$ (taking any
behavior $\<\:p,\mu,\^S>$ to its alphabet $\^S$) is in fact a
{\em fibration} \cite{J83}.
For a more detailed discussion of this in the context of labeled
Petri-nets see \cite{Win88}.
In the case of general {\%T} where $t$ is not injective, it is not
necessarily the case that $t^*$ exists.
For our purposes, it will suffice to note that for any $t$ for which
$t_*$ and $t^*$ both exist, $t^*$ is a right adjoint of
$t_*$; this adjointness is actually a condition stricter than mere existence.
\comment{
In fact, in $\Pom$, $t^*$ never exists if $t$ is not injective.
This is one of a number of points of grief resulting from the fact
that the category of posets is actually a subcategory of the
{*** why we prefer Prom to Pom}}
In general, we consider ourselves to have been given a category $\%B$,
understood to be some $(\mbox{\%T}\comma\^S)$ where $\%T$ is a suitable
subcategory of \%D\hCat.
The objects of $\%B$ are behaviors, and its morphisms can be
interpreted as relating behaviors to component behaviors (as above).
This category is equipped with a projection $\^s:\%B\to\Set$ (giving
the alphabet of actions associated with any given behavior)
and for each alphabet $\^S$ we also have the fibre category
$\%B_{\^S}$ consisting of behaviors of {\%B} over that particular
alphabet, or alternatively, the inverse image under $\^s$
of the object $\^S$ and its identity morphism.
We also assume for any $t:\^G\to\^S$ the existence of
translation ($t_*:\%B_{\^G}\to\%B_{\^S}$) and in the cases
where we need it, the existence of the restriction
($t^*:\%B_{\^S}\to\%B_{\^G}$) functor accompanied by an adjunction
$t_*\dashv t^*$.
Given our concern about colimits, we should note that if $\%T$ is
almost cocomplete and its forgetful functor
preserves colimits, then $\%B$ is also almost cocomplete\footnote
{In the case of $\Pom$, this result is not applicable, but
we still have cocompleteness.
However, colimits in {\Pom} do not respect the alphabets, that is,
$\^s:\Pom\to\Set$ does not preserve colimits.
For this reason we prefer, when considering partial orders
to either
\begin{enumerate}
\item
think of them as preorders, i.~e., work in \_2\hCat,
allowing for the possibility of distinct simultaneous events, or
\item
interpret the order relations as being strict,
i.~e., work in $\Pom_{<}=(\Pos_{<}\comma\Set)$,
\end{enumerate}
and avoid using {\Pom} altogether.}.
Likewise if $\%T$ is complete, then so is $\%B$.
One can then go on to define some basic behaviors and behavior
operations using colimits, limits, restrictions, and translations.
Another source of operations is the lifting of the symmetric monoidal
structure $(\otimes,I)$ from {\%D} to {\%D\hCat} and thence to {\%B},
providing both of the latter categories with a $\otimes$
($\otimes_{\%B}$ is orthocurrence as in \cite{Pr86}) and an $I$.
Details of the constructions are in \cite{CCMP}.
\chapter{Processes}\label{chapter:proc}
\section{Choice and Process Morphisms}
The element missing from the machinery presented thus far for schedules is that of nondeterminism or {\em choice}.
In particular, as we have presented it, a schedule carries the implicit assumption that all of the constituent events will be enacted.
If one wishes to indicate that an event need {\em not} occur in certain circumstances,
let alone the possibility that two events may mutually exclude each other,
additional structure is required.
It is this augmented schedule-with-choices notion that we shall refer to as a {\em process}.
Winskel \cite{NPW}, in his event structure model, provides for nondeterminism by producing a single schedule
(i.~e., the partial order case thereof) and annotating it with a {\em conflict} relation.
The interpretation placed on this is that only some conflict-free subset of the given events will occur.
Thus, a process that enacts one of the events $a$ or $b$ but not both would be depicted as a schedule
containing the events $a$ and $b$, but also containing the pair $(a,b)$ in its conflict relation.
Note that in this case we still have to provide temporal information relating $a$ and $b$ even though
conflict renders such information irrelevant.
While the event structure model provides for a fairly simple structure, its conflation of causal and temporal order is troublesome when attempting to derive generalizations in which the temporal information is given by something other than a partial order.
While we will not go so far as to claim that causality, unlike time, is fundamentally a binary relation, i.~e., either one event causes another or it does not, we do however suggest that extensions to this depiction of causality will not necessarily take the same form as the corresponding extension to the notion of time as a partial order.
The approach we shall take is the one adopted by Pratt \cite{Pr86} and Grabowski \cite{Gra} in the original pomset model, namely that which represents a choice by using distinct schedules.
In this framework, a process is then a {\em set} of schedules.
The interpretation here is that some schedule in the process' set will be followed,
i.~e., there exists a schedule in the process' set for which all events will occur in accordance
with its temporal constraints.
For example, the above process that can perform one of $a$ or $b$ is now a set containing two schedules,
each having but a single event.
Additional structure is clearly necessary, but for now we consider the implications of this set-of-sets notion of process by itself.
Henceforth, \%B shall refer to some category of schedules, labeled or unlabeled, as developed in the previous chapter (e.~g., {\%D\hPhys} for some suitable temporal domain \%D). \%B essentially serves as a parameter in the process construction.
\subsection{Processes as DNF Propositions}
So consider, for now, the individual schedules as bare sets, i.~e., ignoring any temporal information that may be present. We then have an outer set, which we shall call the {\em alternative} or {\em index} set, i.~e., the set of alternatives indexing the schedules from which the process has to choose.
The first observation is that this form of process is essentially a disjunctive normal form (DNF) representation.
That is, we have a set of disjuncts, namely the alternatives, each of which is itself a conjunction,
i.~e., a schedule of events which {\em all} occur.
Taken as a whole, the process, stripped of its temporal information, is simply a DNF proposition concerning which events will actually occur.
Since a major goal of this work is to extend the notion of schedule composition to process composition,
we should, in the course of establishing our notion of process, also establish a corresponding notion
of morphism from one process to another,
the intention being that these morphisms play exactly the same role for processes
as the above-described schedule morphisms do for schedules,
e.~g., relating an observation to a process or embedding one process in another.
With our processes being DNF propositions, a natural choice of morphism to use to relate these propositions is that of {\em implication}.
Consider first the special case of conjunctions, where a suitable way to demonstrate that
$\bigwedge_{i\in I}p_i$ implies $\bigwedge_{j\in J}q_j$
is to exhibit a function $f:J\to I$ such that for all $j\in J$ we have $p_{f(j)}\implies q_j$.
Such a function provides an elementary proof by demonstrating that every conjunct on the right is implied by something on the left. Moreover, if the implication holds, there will exist a proof of this form, though there is certainly no claim about it being unique.
Dually, for disjunctions, we can demonstrate that $\bigvee_{i\in I}p_i$ implies $\bigvee_{j\in J}q_j$
via a function $f:I\to J$ such that for all $i\in I$ we have $p_i\implies q_{f(i)}$.
Here, every disjunct on the left implies something on the right.
A category \%C determines a category of indexed families of objects of \%C via a standard construction.
\begin{definition}
For any category \%C, the category $\_\Fam(\%C)$ has objects of the form $\_C=(I,(C_i)_{i\in I})$ for some index set $I\in\_\Set$ and some collection of objects $C_i\in\ob(\%C)$,
and morphisms $\_f:\_C^1\to\_C^2$ of the form $(f:I^1\to I^2,(f_i:C^1_i\to C^2_{f(i)})_{i\in I^1})$, where $f$ is a function and the $f_i$ are morphisms of \%C. Composition and identity are given by $\_g\_f=(gf,(g_{f(i)}f_i)_{i\in I^1})$ and $1_{\_C}=(1_I,(1_{C_i})_{i\in I})$ respectively.
\end{definition}
That is, we provide a function between the index sets, and for every element of the domain's index set we provide a morphism relating the corresponding objects of \%C.
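The composition law can be made concrete in a short Haskell sketch. This is a hypothetical illustration: we elide the objects, represent base-category morphisms as ordinary functions on {\tt Int}, and the names {\tt FamMor} and {\tt composeFam} are ours.

```haskell
-- Hypothetical sketch of morphisms in Fam(C), with the base category's
-- morphisms represented concretely as functions on Int.
type Mor = Int -> Int              -- a morphism of the base category

-- A morphism of Fam(C): a function between the index sets together with
-- a family of base morphisms f_i : C^1_i -> C^2_{f(i)}.
data FamMor = FamMor { ix :: Int -> Int, component :: Int -> Mor }

-- Composition gf = (gf, (g_{f(i)} f_i)), as in the definition above.
composeFam :: FamMor -> FamMor -> FamMor
composeFam g f = FamMor (ix g . ix f) (\i -> component g (ix f i) . component f i)

-- Identity: identity index function, identity components.
idFam :: FamMor
idFam = FamMor id (const id)
```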
Thus, given a category \%A of atomic propositions and implications thereof, we can construct
\begin{itemize}
\item
$\_\Fam(\%A)^{op}$, the category of conjunctive propositions on \%A (i.~e., conjunctions of atomic propositions).
Notice that in the case of conjunctions, the direction of the implication runs opposite to that of the functions involved, thus the need for the ${-}^{op}$.
\item
$\_\Fam(\_\Fam(\%A)^{op})$, the category of disjunctions of propositions of $\_\Fam(\%A)$, or, equivalently, the category of DNF propositions on \%A.
\end{itemize}
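The elementary proofs described above admit a direct computational reading. The following hypothetical Haskell sketch (function names ours) searches for the witness function $f$ in each case, given an implication relation on atoms; it returns the witness as a list of indices, or nothing if no elementary proof exists.

```haskell
-- Conjunction case: to show /\ p_i => /\ q_j, exhibit f : J -> I with
-- p_{f(j)} => q_j for every j; the witness is a list of indices into ps.
conjWitness :: (a -> a -> Bool) -> [a] -> [a] -> Maybe [Int]
conjWitness imp ps qs = mapM (\q -> firstIdx (\p -> imp p q) ps) qs
  where firstIdx t xs = case [ i | (i, x) <- zip [0 ..] xs, t x ] of
                          (i:_) -> Just i
                          []    -> Nothing

-- Disjunction case: to show \/ p_i => \/ q_j, exhibit f : I -> J with
-- p_i => q_{f(i)}; note the reversal of roles relative to conjWitness.
disjWitness :: (a -> a -> Bool) -> [a] -> [a] -> Maybe [Int]
disjWitness imp ps qs = conjWitness (flip imp) qs ps
```

The role reversal in {\tt disjWitness} is precisely the ${-}^{op}$ appearing in the conjunctive construction below.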
We now wish to generalize the two-step construction above by replacing $\_\Fam(\%A)$ by the schedule category \%B. According to this generalization, a schedule can be interpreted as a generalized conjunction of event-occurrence propositions.
Therefore, it is not actually necessary to produce a category \%A of atomic event-occurrence propositions, since in the case of schedules such a category would provide no information about the identities of events beyond what the schedule morphisms themselves already provide.
If a schedule morphism maps one event to another, those events are identified and thus the occurrence of one implies the occurrence of the other vacuously.
It is, however, important to note that schedule morphisms run in a direction inverse to that of the implication between the corresponding propositions.
That is, if we have a schedule morphism $R\to S$, any occurrence of events satisfying $S$ necessarily satisfies $R$ as well. Thus, as propositions, $S\implies R$.
The actual category of schedules-as-propositions is then $\%B^{op}$, which suggests that the corresponding category of processes, i.~e., DNF propositions instead of pure conjunctive propositions, is going to be $\_\Fam(\%B^{op})$.
\subsection{Adding Structure to the Alternative Set}
Of course, we should now consider what the actual structure on processes should be.
While our main motive in using a DNF style of process is that of achieving a separation of causal and temporal information, the structureless set of schedules described thus far achieves, in some sense, much too complete a separation.
\begin{itemize}
\item
Having an unstructured set of choices removes the possibility of describing {\em when} a particular choice takes place. Thus we have the generally recognized advantage of {\em branch\-ing-time} models.
\item
There is nothing to relate events on distinct schedules within a process.
\end{itemize}
To elaborate on the latter objection, if, e.~g., one wishes to specify that, once events $a$ and $b$ occur, $c$ {\em may} occur, one may consider a process of two schedules whose event sets are $\{a,b\}$ and $\{a,b,c\}$.
However, unless we stipulate that all events used in schedules are drawn from some universal set of events,
there is nothing to indicate that the two $a$'s are supposed to refer to the same event
(``same'' in the sense that whether the event occurs has the same implications for observers of this process, no matter that it occurred in one schedule or the other).
The situation becomes even more obscure should we be using labeled schedules and thus might want to specify something like ``$c$ {\em may} occur once $a$ has occurred {\em twice},'' though this is perhaps more properly a hazard of using event labels to specify events.
Thus the additional structure on processes should establish the required identities between events of the constituent schedules.
We already have a mechanism to facilitate this, namely that of schedule morphisms.
The alternative set will then be (at least) preordered, the idea being that if an event $c$ may optionally occur, but only once some other collection of events has occurred, we essentially have two schedules (alternatives): one containing only the predecessor events and one containing $c$ as well, with the latter alternative coming later in the ordering on alternatives.
This mapping of alternative orderings to schedule morphisms suggests viewing the alternative set itself as a category, thus allowing for the concise definition of process as being a functor into the schedule category \%B.
\begin{definition}
Given a category (of schedules) $\%B$, a {\em process} $P$ over $\%B$ is a
category $A_P$ together with a functor $P:A_P\to\%B$.
\end{definition}
Having upgraded the alternative set from a mere set to a category, we should also present the corresponding upgrading of the $\_\Fam(\%B^{op})$ notion of process morphism.
As might be expected, however, we have both the order structure on the alternative set and the various schedules' temporal constraints to contend with.
In this case, the function between the alternative sets becomes a functor and the family of schedule functions becomes a natural transformation.
\begin{definition}
A {\em process morphism} $f:P\to Q$ is a functor
$A_f:A_P\to A_Q$ together with a natural transformation
$f:QA_f\nato P:A_P\to\%B$.
We denote the category of processes over $\%B$ as \PROC{\%B}.
\end{definition}
As a matter of convention, we take the direction of the process morphism to be the same as that of the implication of the corresponding propositions, which also happens to match the direction of the underlying set function on the alternative sets.
It is only in the unfortunate situation of schedules and schedule morphisms that the directions of the set functions and the implications happen to be different.
Given that we wish to adhere to previous conventions for schedule morphisms, the natural transformation must necessarily be in the opposite direction.
The import of the naturality itself
$$\commdiag
{QA_f(p)&\rTo{f_p}{}&P(p)\cr
\dTo{}{}&&\dTo{}{}\cr
QA_f(p')&\rTo{f_{p'}}{}&P(p')\cr}$$
is simply that if satisfaction of $P$ implies satisfaction of $Q$, $P$ should ``agree with'' any identification of events made between schedules of $Q$.
Otherwise an anomalous situation arises wherein for some pair of events of $P$, one has occurred, one has not, and $Q$ has both of these events identified.
Despite the categorical machinery thus far invoked, we have not yet introduced any real structure: while this definition allows for the introduction of a nontrivial ordering on $A_P$ and indicates how, in such a case, the ordering should be reflected in the constituent schedules, it does not require the ordering to be anything other than discrete.
Further stipulations will be necessary to obtain a more recognizable notion of process.
To that end, we now consider the various subcategories of \PROC{\%B} in which our true interest lies.
If we consider only processes $P$ for which $A_P$ is a discrete category,
we are, as noted above, back to our previous definition which
corresponds to the usual notion of process as a set of schedules with
no further relationships between the various alternative schedules.
We will refer to these as discrete processes and the corresponding
(full) subcategory as \PROCD{\%B}.
While there is implicit in our choice of a DNF form for process a general desire to separate causal from temporal information, one can only take this so far.
In particular, it is still generally accepted that cause precedes effect.
In fact, it is only the converse implication of temporal order implying causation that we wish to suppress.
Thus, we would like a stipulation that requires the schedule corresponding to a later alternative in the
alternative set ordering to contain all of the events of any predecessor schedules and furthermore to not introduce any tighter temporal constraints between any pairs of such events.
For this purpose we develop the notion of prefix.
\subsubsection{Prefixes}
As noted earlier, if the structure of the process does not relate two of its alternative schedules
in any way, then the choice between them is essentially made at the beginning of time.
If we could identify events on separate schedules within a process,
this would provide a way of representing a {\em deferred} choice
in the sense that an occurrence of an event or collection of events common to both schedules
does not necessarily commit the process to be following one or the other of the schedules in question.
That is, the choice made between the schedules is not seen to take place until either
\begin{itemize}
\item
we have the occurrence of some event that is present in only one of the schedules, or
\item
some event common to both schedules occurs, but the timing of the occurrence
violates the constraints given by one of the schedules.
\end{itemize}
However, given that the morphisms we use to identify events have an implicit directionality to them,
we prefer to view the general situation of two schedules sharing some collection of events as rather
that of two schedules sharing a common predecessor schedule.
In other words, we have three schedules: one which the process follows ``before'' the choice is made and two which represent the possible outcomes of the choice.
This view decomposes all choices into predecessor-successor pairs, i.e.,
all choices are now of the form, ``Either we stay with this schedule or we advance to the next one.''
What remains is to clarify the relationship between a schedule $\:p$ and any successor schedule $\:p'$.
In the realm of partial orders, a prefix of an order $\:p$
is a downward closed subset $\:p_0$, i.e., such that for any elements $x,y\in \:p$,
if we have that $x\le y$ and $y\in \:p_0$ then $x$ must be in $\:p_0$ as well.
This can equivalently be represented as a pullback
$$\begin{diagram}
\:p_0 & \rInto & \:p\\
\dTo^! && \dTo_{\chi}\\
\:1 & \rTo^0 & \:2\\
\end{diagram}$$
where \:2 is the usual two-element order $\{0,1\mid 0\le1\}$.
The pullback essentially makes $\:p_0$ be the inverse image under $\chi$ of $0$
and it is the ordering on \:2 that forces the prefix property.
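For finite partial orders, both the downward-closure property and the characteristic map $\chi$ are directly computable. The Haskell fragment below is an illustration only, with names of our own choosing.

```haskell
-- A finite order is given by its carrier `p` and relation `le`; a subset
-- `p0` is a prefix iff it is downward closed.
isPrefix :: Eq a => (a -> a -> Bool) -> [a] -> [a] -> Bool
isPrefix le p p0 = and [ x `elem` p0 | y <- p0, x <- p, x `le` y ]

-- The characteristic map into the two-element order {0 <= 1}: events of
-- the prefix go to 0, all others to 1, so the prefix is recovered as the
-- inverse image of 0.
chi :: Eq a => [a] -> a -> Int
chi p0 x = if x `elem` p0 then 0 else 1
```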
One complication in translating this to the more general schedule framework
is that the terminal object \:1 may not actually exist in \%B.
For example, this is usually the case if we take $\%B=\%D\hPhys$.
Likewise the self-constraints in \:2 will need to be terminal in order for these pullbacks to exist in general.
With $\%D\hPhys$ we can, however, consider the corresponding diagram to be in $\%D\hCat$. $\:1$ and $\:2$ thus need only be objects of \%D\hCat.
The general definition of prefix is then as follows:
\begin{definition}
\def\Bc{\overline{\%B}}
Given a category of schedules \%B, a {\em prefix classifier} for \%B is a map $k:\:1\to\:2$ in a category $\Bc$ of schedules, such that
\begin{itemize}
\item
$\Bc$ is complete and contains \%B as a full subcategory,
\item
the inclusion $i:\%B\to\Bc$ is a monoidal functor
(i.e., preserves $\otimes$ and \:I)
and respects the forgetful functors into {\_\Set},
i.~e., $V_{\Bc}\circ i=V_{\%B}$
\item
$\:1$ is a terminal object in $\Bc$.
\item
$\:2$ is a schedule of $\Bc$ having two events $v$ and $w$, with $\delta_{\:1}(u,u)=\delta_{\:2}(v,v)$,
where $u$ is the single event of $\:1$ and $k(u)=v$.
\end{itemize}
Given a map $\chi:\:p\to\:2$, the {\em $\chi$-prefix} of $\:p$ is the pullback of $k$ along $\chi$, when that pullback exists in \%B; a monomorphism $\:q\rInto\:p$ is a {\em prefix morphism} iff it arises in this way.
\end{definition}
\begin{theorem}
Given a diagram $(\:p_i,\:g_i)$ in \%B with colimit $\:p$ and a compatible family of maps $\chi_i:\:p_i\to\:2$ inducing $\chi:\:p\to\:2$, the corresponding diagram of prefixes,
in which $\:q_i$ is the $\chi_i$-prefix of $\:p_i$ and the $\:g_i$ are the
canonical maps, has the colimit $(\:q,\kappa_i: \:q_i\to \:q)$, where $\:q$ is
the $\chi$-prefix of $\:p$.
\end{theorem}
\noindent
In other words, the colimit of a diagram of prefixes is a prefix of the colimit.
Having fixed a particular prefix classifier in the schedule category $\%B$
we can then consider the following class of processes:
\begin{definition}
A process $P$ is {\em prefix ordered} iff
\begin{itemize}
\item
$A_P$ is a preorder
\item
For all $p,p'\in A_P$: $p\le p'$ implies $P_{pp'}:P(p)\to P(p')$ is a prefix morphism.
\end{itemize}
A process morphism $f:P\to Q$ is {\em prefix preserving} iff for each $p\le p'$ in $A_P$, the naturality-of-$f_p$ diagram
$$\commdiag
{QA_f(p)&\rTo{f_p}{}&P(p)\cr
\dIntoA{}{}&&\dIntoA{}{}\cr
QA_f(p')&\rTo{f_{p'}}{}&P(p')\cr
is a pullback in ${\%B}$.
\end{definition}
The significance of the pullback requirement for prefix-preserving morphisms is that it eliminates the possibility of an event appearing in an ``early'' alternative of $P$ without appearing in the corresponding ``early'' alternative of $Q$.
That is, processes joined by a morphism should agree on the causal precedence of events that they have in common.
Prefix ordered processes and prefix preserving morphisms form a category which we will denote (\PROCK{\%B}).
\subsubsection{Branching Time}
The next observation is that prefix ordered processes allow for the presence of ``useless'' alternatives.
\begin{definition}
Given a schedule category {\%B} having an initial object $0$ and satisfying the condition
$$\forall s\in\ob(\%B):\quad\hbox{there exists some morphism $s\to 0$}\implies\hbox{$s$ is initial in \%B}$$
a {\em branch\-ing-time} process over \%B is a prefix ordered process $P$ for which
\begin{itemize}
\item
$P(p)=0$ implies $p$ initial in $A_P$
\item
$p\le p'\in A_P$ implies that either $p=p'$ or the corresponding prefix map
$P(p)\hookrightarrow P(p')$ is proper.
\end{itemize}
\PROCB{\%B} denotes the full subcategory of \PROCK{\%B} consisting of branch\-ing-time processes.
\end{definition}
The intuitive content here is that in any ordered sequence of alternatives, each successive schedule should ``make progress'' in the sense of each one having additional events not present in its predecessor.
One might wish also to merge {\em every} pair of isomorphic schedules within a process, no matter how they are arrived at, thus requiring processes to be actual sets rather than the multisets thus far described.
Unfortunately, for some choices of schedule category \%B, it is possible for a pair of schedules to be isomorphic without being {\em canonically} isomorphic.
Without further stipulations on the structure of a process,
the only case where such a canonical isomorphism is available is that of the initial schedule, $0$,
so we can at least merge any initial alternatives that might exist (as we do above).
The lack of canonical isomorphisms in general would defeat the construction of process limits
as we shall discuss when we present that construction.
The condition imposed on \%B is true of all of the examples we have seen thus far.
The conditions on a branch\-ing-time process $P$ together imply that
\begin{itemize}
\item
there is at most one object in $A_P$, which we will call $\bot$ if it exists, for which $P(\bot)=0$.
\item
$A_P$ is a partial order, for the second condition above enforces the antisymmetry axiom.
\end{itemize}
Finally, note that morphisms
$f:P\to Q$ between such processes will automatically be strict,
that is, $A_f(\bot_P)=\bot_Q$.
\subsubsection{Closure}
For some applications, it will be necessary to be able to infer a particular infinite schedule from the corresponding sequence of finite schedules that have it as a limit.
The stipulations for the various types of process presented thus far do not require such infinite schedules to ever exist.
\begin{definition}
Given a category \%B (of schedules) having all filtered colimits,
a branch\-ing-time process $P$ over \%B is {\em closed} iff for any directed subset $S$ of $A_P$,
there exists a least upper bound $\bigvee S$ in $A_P$ satisfying $P(\bigvee S)=\colim_{s\in S}P(s)$.
A morphism between closed processes $f:P\to Q$ is {\em continuous} iff
$A_f$ is continuous, i.~e., preserves joins of directed subsets.
We denote the category of closed branch\-ing-time processes and continuous morphisms as \PROCBC{\%B}.
\end{definition}
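For finite processes, the closedness condition can be checked directly if we let event sets stand in for schedules and let the union of the sets along a directed subset stand in for the filtered colimit. The sketch below is an illustration under exactly those simplifying assumptions, with the name {\tt closedOn} our own; it exploits the fact that a directed subset of a finite order has a greatest element.

```haskell
import Data.List (union, foldl')

-- A directed subset of a finite order has a greatest element, so closedness
-- reduces to asking that P at that element be the union of P along the
-- subset (the union standing in for the filtered colimit).
closedOn :: Eq a => (Int -> [a]) -> [Int] -> Bool
closedOn p s = p (maximum s) == foldl' union [] (map p s)
```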
Because of the naturality of $f:QA_f\nato P$, we have that for any directed set $S$ in $A_P$, the map $f_{\bigvee S}:QA_f(\bigvee S)\to P(\bigvee S)$ is determined by the maps $(f_s)_{s\in S}$, so continuity imposes no additional burden on the schedule components of a morphism.
Here, the word ``closed'' is being used in its topological sense, that of a closed set containing its limit points, rather than the categorical sense, that of possessing a right adjoint to the tensor product. The latter sense would be meaningless here, given that we do not even view a process $P$ as a category, let alone have a definition of tensor product on $P$.
\subsubsection{Labels}
If we are considering processes over a category of labeled schedules (e.~g., $\%B=(\%D\hCat\comma\_\Set)$),
we have to contend with the orthogonal issue of the alphabet from which the process draws its labels.
One generally expects that all of the constituent schedules of a given process will be over the same alphabet.
While this can be dealt with simply by taking \%B to be $\%B_{\^S}$, a category of labeled schedules over a single alphabet \^S, this does not allow for morphisms between processes over distinct alphabets as will often arise in the course of composing processes, i.~e., one should not be compelled to specify a full system alphabet in advance.
\begin{definition}
Given a category of labeled schedules $\%B$, i.~e., with a projection $\sigma:\%B\to\Set$,
a process over $\%B$ is {\em alphabetically coherent} if it is also a process over $\%B_{\Sigma}$ for some alphabet $\Sigma$.
Given alphabetically coherent processes $P$ over $\%B_{\Sigma}$ and $Q$ over $\%B_{\Gamma}$,
a process morphism (in \PROC{\%B}) $f:P\to Q$ is likewise {\em alphabetically coherent} if there
exists a translation $t:\Gamma\to\Sigma$ such that, for all $p\in A_P$,
$\sigma(f_p:QA_f(p)\to P(p))=t$.
\CPROC{\%B} denotes the subcategory of \PROC{\%B} consisting of alphabetically coherent processes and process morphisms.
\end{definition}
As noted, this question of alphabetical coherence is orthogonal to the question of the particular alternative-set structure we wish to use.
One may thus consider corresponding categories of alphabetically coherent processes, namely the categories of discrete, prefix ordered, branch\-ing-time or closed branch\-ing-time such processes, respectively ${\cal P}^{\bullet}_d[\%B]$, ${\cal P}^{\bullet}_k[\%B]$, ${\cal P}^{\bullet}_{k\bot}[\%B]$, and \CPROCBC{\%B}.
\section{Properties}
\subsection{Pointwise Lifting of Schedule Functors}
{\em Pointwise lifting} refers to the notion of taking a unary schedule operation and applying it uniformly to all of the constituent schedules of a given process.
We need to establish what we mean by ``uniformly'' and present sufficient conditions on the schedule operation so that such a lifting will be possible.
Essentially, if we are given a functor $F:\%B\to\%B'$ between two categories of schedules,
we can apply this functor to all of the constituent schedules of a given process over $\%B$
to yield a corresponding process over $\%B'$.
\begin{definition}
Given a functor $F:\%B\to\%B'$, the {\em lifted} functor $F_{\#}:\PROC{\%B}\to\PROC{\%B'}$
is defined as follows: For any process $P\in\PROC{\%B}$,
$F_{\#}P$ is given by $A_{F_{\#}P}=A_P$ (i.~e., same index set)
and $F_{\#}P(:A_P\to\%B')=FP$. For any morphism $f:P\to Q$,
$F_{\#}f$ is given by $A_{F_{\#}f}=A_f$ (i.~e., same index function)
and $(F_{\#}f)_p (:FQA_f(p)\to FP(p))=Ff_p$.
\end{definition}
\nopagebreak
\noindent
Functoriality of $F_{\#}$ follows almost immediately from functoriality of $F$.
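The lifting itself is easily rendered in Haskell if we elide the alternative-set structure and naturality conditions. The sketch below (with names of our own choosing) shows $F_{\#}$ acting as $F$ on each constituent schedule while leaving the alternatives untouched.

```haskell
-- Hypothetical sketch: a process is an alternative set together with a
-- schedule for each alternative; the order structure on alternatives and
-- the naturality conditions are elided.
data Process s = Process { alts :: [Int], sched :: Int -> s }

-- F_# applies the schedule functor F to every constituent schedule and
-- leaves the alternative set untouched.
liftF :: (s -> s') -> Process s -> Process s'
liftF f (Process a sch) = Process a (f . sch)
```

Functoriality of {\tt liftF} is immediate: lifting a composite is the composite of the liftings, mirroring the proposition below.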
Similarly, natural transformations lift as well.
\nopagebreak
\begin{definition}
Given a natural transformation $\^h:F\nato G:\%B\to\%B'$, the {\em lifted}
natural transformation $\^h_{\#}:G_{\#}\nato F_{\#}:\PROC{\%B}\to\PROC{\%B'}$ is defined
by specifying the morphisms $\^h_{\#P}:G_{\#}P\to F_{\#}P$ as follows
\begin{itemize}
\item
$A_{\^h_{\#P}}=1_{A_P}$, recalling that $A_P=A_{G_{\#}P}=A_{F_{\#}P}$.
\item
$(\^h_{\#P})_p = \^h_{P(p)}$
\end{itemize}
\end{definition}
As one might expect, naturality of $\^h_{\#}$ follows almost immediately from naturality of $\^h$. Applying this lifting to the unit and counit transformations of an adjunction, we get
\begin{proposition}\label{prop:liftadj}
If $F:\%B\to\%B'$ is the left (right) adjoint of $G:\%B'\to\%B$, then $F_{\#}:\PROC{\%B}\to\PROC{\%B'}$ is the right (left) adjoint of $G_{\#}:\PROC{\%B'}\to\PROC{\%B}$.
\end{proposition}
\begin{proofo}
If $F$ is the left adjoint of $G$,
we have unit $\^h:1_{\%B}\nato GF$ and counit $\^e:FG\nato 1_{\%B'}$
natural transformations satisfying the usual triangular identities
\begin{eqnarray*}
\^e_{F\:X}\circ F\^h_{\:X}&=&1_{F\:X}\\
G\^e_{\:Y}\circ\^h_{G\:Y}&=&1_{G\:Y}\\
\end{eqnarray*}
These transformations lift to $\^h_{\#}:(GF)_{\#}\nato 1_{\PROC{\%B}}$ and $\^e_{\#}:1_{\PROC{\%B'}}\nato (FG)_{\#}$. Noting that $(GF)_{\#}=G_{\#}F_{\#}$ and $(FG)_{\#}=F_{\#}G_{\#}$, the triangular identities can likewise be shown to lift as well to
\begin{eqnarray*}
F_{\#}\^h_{\#P}\circ\^e_{\#F_{\#}P}&=&1_{F_{\#}P}\\
\^h_{\#G_{\#}Q}\circ G_{\#}\^e_{\#Q}&=&1_{G_{\#}Q}\\
\end{eqnarray*}
which suffices to demonstrate that $F_{\#}$ is a right adjoint of $G_{\#}$ with
$\^e_{\#}$ and $\^h_{\#}$ being the unit and counit, respectively.
\end{proofo}
In particular, for the case of \%B being a category of labeled schedules and for a given
alphabet translation $t:\Gamma\to\Sigma$,
we recall the corresponding translation and restriction schedule functors
$t_*:\%B_{\Gamma}\to\%B_{\Sigma}$ and $t^*:\%B_{\Sigma}\to\%B_{\Gamma}$.
We can then lift these to process functors
$t_{\#}:\PROC{\%B_{\Gamma}}\to\PROC{\%B_{\Sigma}}$ and
$t^{\#}:\PROC{\%B_{\Sigma}}\to\PROC{\%B_{\Gamma}}$ in the above manner.
As a matter of simplifying the notation, it is understood that $t_{\#}$ and $t^{\#}$ are abbreviations for
$t_{*\#}$ and $t^*_{\#}$ respectively.
Given the adjunction $t_*\dashv t^*$, we can then infer a corresponding adjunction $t^{\#}\dashv t_{\#}$
from Proposition \ref{prop:liftadj}.
In the subcategory of prefix ordered processes, we need additional conditions on the supplied functor
$F:\%B\to\%B'$ in order that the resulting lifted functor $F_{\#}:\PROCK{\%B}\to\PROCK{\%B'}$ indeed maps into \PROCK{\%B'}, i.~e., that the constructions for $F_{\#}P$ and $F_{\#}f$ indeed produce a prefix ordered process and a prefix preserving morphism when given prefix ordered $P$ and prefix preserving $f$.
The following set of conditions is sufficient to guarantee this:
\begin{itemize}
\item
$F$ preserves pullbacks.
\nobreak
\item
\nobreak
If $k:1\to 2$ is the prefix classifier for \%B, then $Fk:F1\to F2$ is a prefix in $\%B'$.
\end{itemize}
Likewise, a natural transformation $\^h:F\nato G:\%B\to\%B'$ of schedule functors will only lift if
its naturality diagram
$$\commdiag
{Fs&\rTo{\^h_s}{}&Gs\cr
\dIntoA{}{}&&\dIntoA{}{}\cr
Fs'&\rTo{\^h_{s'}}{}&Gs'\cr}$$
is a pullback in the case where the $s\to s'$ morphism is a prefix.
This is required in order for $\^h_{\#P}$ to actually be a prefix preserving morphism for any $P\in\PROCK{\%B}$.
Sufficient conditions to guarantee this are the aforementioned conditions on $F$
together with the condition that
$$\commdiag
{F1&\rTo{\^h_1}{}&G1\cr
\dIntoA{Fk}{}&&\dIntoA{}{Gk}\cr
F2&\rTo{\^h_2}{}&G2\cr}$$
be a pullback.
In the case of branch\-ing-time processes,
though we don't need any additional conditions on $F$ beyond that for the prefix ordered case,
there is more to do to lift a functor $F:\%B\to\%B'$
to a corresponding {\em induced} functor $F_{\#\bot}:\PROCB{\%B}\to\PROCB{\%B'}$.
For any process $P\in\PROCB{\%B}$, we can construct an equivalence relation $R$ on $A_P$
generated by the set of all pairs $(p,p')$ in $A_P$ for which
\begin{itemize}
\item
$p\le p'$ and $FP(p\to p')$ is an isomorphism, or
\item
$FP(p)$ and $FP(p')$ are both initial.
\end{itemize}
Then the lifted {\em branching time} process, which we shall also denote $F_{\#}P$,
is a process whose alternatives are given by the quotient category $A_P/R$.
We then take a representative $p$ of each $R$-equivalence class
$[p]$ and have $F_{\#}P:A_P/R\to\%B'$ be the functor taking $[p]$ to $FP(p)$.
This construction is not canonical, since for any particular equivalence class of alternatives,
there may be a variety of choices for $p$, though the functoriality of $P:A_P\to\%B$
guarantees a canonical isomorphism between any pair of such choices.
Thus, all of the possible $F_{\#}P$'s we can construct will themselves be canonically isomorphic.
For any morphism $f:P\to Q$, showing that there exists a well-defined morphism
$F_{\#}f:F_{\#}P\to F_{\#}Q$
is a matter of showing that $A_f$ maps $R$-equivalent alternatives of $A_P$ to equivalent alternatives of $A_Q$.
We have already noted that $A_f$ is strict, which handles the case of alternatives equivalent by virtue of being initial.
The case of noninitial $p\le p'$ with $FP(p\to p')$ an isomorphism is covered by $F$ preserving prefixes and prefix pullbacks; i.~e., the pullback in the naturality diagram of $f$ is preserved, thus guaranteeing that
$FQ(A_fp\to A_fp')$ is also an isomorphism.
Lifting a natural transformation $\^h:F\nato G$ is similarly complicated
but likewise involves no additional stipulations.
In the general prefix ordered case, we could rely on $A_P=A_{G_{\#}P}=A_{F_{\#}P}$,
and thus have $A_{\^h_{\#P}}$ be the identity.
In the branching time case, various alternatives of $A_{F_{\#}P}$ and $A_{G_{\#}P}$
will be identified in which case we need to ensure that we can still define the map $A_{\^h_{\#P}}:A_{G_{\#}P}\to A_{F_{\#}P}$.
The essential observation is that if two alternatives $p$ and $p'$ of $A_P$ are identified in $A_{G_{\#}P}$,
this happens in one of the following two ways:
\begin{itemize}
\item
$GPp$ and $GPp'$ are both initial, in which case $FPp$ and $FPp'$ have to be as well,
\item
$p\le p'$ and $GPp\hookrightarrow GPp'$ is an isomorphism, but then, since
$$\commdiag
{FPp&\rTo{\^h_{Pp}}{}&GPp\cr
\dIntoA{}{}&&\dIntoA{}{}\cr
FPp'&\rTo{\^h_{Pp'}}{}&GPp'\cr}$$
is a pullback, we have $FPp\hookrightarrow FPp'$ being an isomorphism as well.
\end{itemize}
In each case, it is seen that the offending alternatives must also be identified in $A_{F_\#P}$.
Therefore the identity on $A_P$ factors through the quotient to yield the desired alternative-set map.
In the case of closed branch\-ing-time processes, $F$ lifts just as for branch\-ing-time processes;
we need only add that $F$ preserve filtered colimits to ensure that $F_{\#}P$ will be closed.
\subsection{Process Limits and Coproducts}
We now turn to the question of limits and colimits in the process category, starting with a result about the general process category \PROC{\%B}.
As a preliminary observation, notice that there is a forgetful functor $A:\PROC{\%B}\to\_\Cat$ that discards the schedule information,
i.~e., takes a process $P$ to its alternative set (category, actually) $A_P$ and a morphism $f:P\to Q$ likewise to its alternative set map $A_f:A_P\to A_Q$.
\begin{lemma}\label{lem:aladj}
If \%B has a terminal object, $A:\PROC{\%B}\to\_\Cat$ has a left adjoint.
\end{lemma}
\begin{proof}
Define $L:\_\Cat\to\PROC{\%B}$ as follows:
For any category $C$, the process $L_C$ is that whose alternative category $A_{L_C}$ is $C$ and whose schedules are all terminal,
i.~e., $L_Cc=1_{\%B}$ for all $c\in\ob(C)$ with morphisms of $C$ going to the canonical terminal schedule maps.
That this is a left adjoint is shown by exhibiting, for any $C\in\_\Cat$,
a universal arrow $C\to A_{L_C}(=C)$ which we take to be the identity.
For any process $P\in\PROC{\%B}$ and any functor $f:C\to A_P$,
we take $g:L_C\to P$ to be such that $A_g=f$ and such that the associated schedule maps $g_c:Pfc\to L_Cc(=1)$ be the terminal map.
Clearly, $g$ is the only morphism for which $A_g\circ 1_C=f$.
\end{proof}
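In hom-set terms, the content of this adjunction can be recorded as a natural bijection (a restatement of the proof, not an additional claim):

```latex
\PROC{\%B}(L_C,\,P)\;\cong\;\_\Cat(C,\,A_P),
```

natural in both $C$ and $P$: a morphism out of $L_C$ is determined entirely by its alternative part, the schedule components being forced by terminality.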
From this, we learn that the $A$ functor preserves limits,
i.~e., the alternative category of a limit of processes will necessarily
be the limit of the corresponding alternative categories in \_\Cat.
\begin{theorem}\label{thm:plim}
For any category $\%B$ having all colimits of type $J^{op}$,
\PROC{\%B} has all limits of type $J$.
\end{theorem}
\begin{proof}
\mathcode`\|="326A
\mathcode`\;="303B
\mathcode`\:="603A
Consider a diagram $J$ in $\PROC{\%B}$ with processes
$(P^i| i\in J)$ and morphisms
$(f^m:P^i\to P^j|m:i\to j\in J)$.
Just to provide names, we will denote the limit as
$(P; \pi^i:P\to P^i | i \in J)$ (that is, $P$ is the object and the $\pi^i$ are the projections of the limiting cone).
From the lemma, we know that the alternative category $A_P$ of the limit
is obtained by taking the limit of the diagram $(A_{P^i};A_{f^m})$ in $\_\Cat$.
This limit in {\_\Cat} also gives us a collection of projections which we can use for the alternative portion ($A_{\pi^i}:A_P\to A_{P^i}$) of the morphisms $\pi^i$.
As a consequence of the construction of limits in \_\Cat,
a typical object $p$ of $A_P$ will be a cone
$(p_i:\{\bullet\}\to A_{P^i} | i\in J)$,
or equivalently, a collection of objects $(p_i\in A_{P^i})_{i\in J}$
such that $p_j=A_{f^m}p_i$ for all $m:i\to j$ in $J$.
For each such $p$, we take $Pp$ to be the colimit of the corresponding schedule diagram $(P^ip_i; f^m_{p_i}:P^jp_j\to P^ip_i)$ in \%B, the colimit injections supplying the schedule components $\pi^i_p:P^ip_i\to Pp$ of the projections.
For the universal property, consider any other cone $(Q; \^t^i:Q\to P^i | i \in J)$ over the process diagram; the limit property of $A_P$ in \_\Cat yields a unique functor $A_{\^x}:A_Q\to A_P$ through which the $A_{\^t^i}$ factor, so it remains only to supply the schedule components of $\^x$.
For any alternative $q\in A_Q$, the relevant schedule diagram is one we've already seen, namely the diagram under the single-point cone $A_{\^x}q$, whose colimit is $(PA_{\^x}q; \pi^i_{A_{\^x}q}:P^iq_i\to PA_{\^x}q)$.
Now $Qq$ and the various $\^t^i_q$ provide us a co-cone over this diagram,
so there is a unique map $\^x_q:PA_{\^x}q\to Qq$ factoring this co-cone through the colimit.
The provision of these $\^x_q$ for all alternatives $q\in A_Q$ completes the definition of $\^x:Q\to P$, and it should be clear that $\^x$ is unique.
\end{proof}
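The two halves of the construction can be summarized in one line, using the same notation as the proof:

```latex
A_P \;=\; \lim_{i\in J}\,A_{P^i}\quad\hbox{in }\_\Cat,
\qquad
Pp \;=\; \mathop{\rm colim}_{i\in J}\,P^i p_i\quad\hbox{in }\%B,
\quad\hbox{for }p=(p_i)_{i\in J}\in A_P.
```

Alternatives are computed by a limit in \_\Cat, schedules by colimits in \%B, reflecting the opposed directions of process and schedule morphisms.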
\vbox
{Considering all possible diagrams $J$ collectively, we obtain
\begin{corollary}
$\%B$ cocomplete implies $\PROC{\%B}$ complete.
\end{corollary}}
Recall, however, that some of the schedule categories we use, $\%D\hPhys$ in particular, are not necessarily cocomplete.
Fortunately, we can obtain a stronger result applicable to schedule categories that satisfy a weaker condition than cocompleteness.
In the above proof, consider the case of a particular diagram
of schedules for which there exists no co-cone at all.
Suppose now that this very diagram arises in the course of a limit construction, say as the schedule diagram underlying some cone $p$ of $A_P$.
Then, for any process cone $(Q; \^t^i:Q\to P^i | i \in J)$, it will not be possible for any alternative $q$ of $A_Q$ to be mapped by the resulting $A_{\^x}$ into the cone $p$, i.~e., to have $A_{\^t^i}q=p_i$ for all $i \in J$.
If it were possible, then there would be a corresponding co-cone down in the schedule category $\^t^i_q:P^ip^i\to Qq$ even though none exists for that particular diagram.
Thus, if the troublesome alternative $p$ were to be left out of $A_P$, it would still be the case that all cones $Q$ over the process diagram would factor through $P$.
In the case of a category \%B which is {\em almost co-complete}, we need only revise our limit construction as follows.
Construct the limit category $A'$ over the diagram of $(A_{P^i};A_{f^m})$ in $\_\Cat$.
Recall that the objects of $A'$ are single-point cones over $(A_{P^i};A_{f^m})$.
Take $A_P$ to be that subcategory of $A'$ consisting only of those single-point cones $p=(p_i\in A_{P^i})_{i\in J}$ for which the \%B-diagram $(P^ip_i;f^m_{p_i}:P^jp_j\to P^ip_i)$ has a co-cone and hence, by almost-cocompleteness, a colimit as well.
Given this choice of $A_P$, for all $p\in A_P$ we take $Pp$ to be the colimit schedule as before.
Note that in this case $A_P$ is not necessarily the limit of the $A_{P^i}$ in {\_\Cat} but rather a subcategory thereof, contrary to what we inferred from Lemma \ref{lem:aladj}.
This is not a contradiction; Lemma \ref{lem:aladj} requires \%B to have a terminal object, which is not the case for, e.~g., \%B=\%D\hPhys.
Indeed, if \%B were to have a terminal object, then almost-cocompleteness would imply cocompleteness, since the terminal object provides {\em every} diagram with a cocone.
\vbox
{To summarize:
\begin{theorem}\label{thm:plima}
If $\%B$ is almost $J^{op}$-cocomplete, \PROC{\%B} is $J$-complete.
\end{theorem}}
Since {\%B} invariably has an initial object, \PROC{\%B} likewise has a terminal object $1$ having a single alternative $\bot$ mapping to the initial
behavior.
Essentially, this is the {\sc Skip} (or {\sc Idle}) process having a single behavior in which
nothing happens.
In the situation where \%B is a category of labeled schedules,
if we wish to construct a limit of alphabetically coherent processes in \CPROC{\%B},
we can use the above construction
provided the resulting limit process can be shown to be itself alphabetically coherent.
\begin{corollary}
If $\%B$ is almost $J^{op}$-cocomplete and $\sigma:\%B\to\Set$ preserves
$J$-colimits, \CPROC{\%B} is $J$-complete.
\end{corollary}
\begin{proofo}
Given a diagram of type $J$ in \PROC{\%B}
all of whose objects and morphisms are coherent,
the limit (together with all of the projections) will turn out to be
coherent since every diagram of behaviors of which we have to take a
colimit has the same underlying diagram of alphabets.
The preservation of colimits by $\sigma$ means that the alphabet of the colimit and the various injections into it are the same in all cases.
\end{proofo}
It should be noted that, in the case of $\%B=\Pom$, $\cod:\Pom\to\Set$ does
not preserve certain coequalizers \cite{CCMP} and thus fails the conditions of this corollary.
One can indeed construct counterexamples in which a limit of pomset processes in $\PROC{\_\Pom}$ is not alphabetically coherent.
These problems do not arise if we instead use $\_\Pom_{<}$, the category of pomsets with strict morphisms.
Though $\_\Pom_{<}$, unlike \_\Pom, is not cocomplete, it is however {\em almost}-cocomplete.
Furthermore, $\^s:\Pom_{<}\to\Set$ preserves colimits in those cases where they actually exist,
thus making \CPROC{\_\Pom_{<}} complete.
\begin{corollary}
If $\%B$ is almost $J^{op}$-cocomplete,
\nobreak
\begin{enumerate}
\item\label{it:d}
$\PROCD{\%B}$ is $J$-complete.
\item\label{it:k}
$\PROCK{\%B}$ is $J$-complete.
\item\label{it:b}
$\PROCB{\%B}$ is $J$-complete.
\item\label{it:bc}
$\PROCBC{\%B}$ is $J$-complete.
\end{enumerate}
\end{corollary}
\begin{proof}
(\ref{it:d}) follows from the fact that a limit over a diagram of discrete categories is necessarily discrete, since the existence of a nontrivial morphism between single-point cones implies a nontrivial morphism somewhere in the diagram.
For (\ref{it:k}) this follows immediately from prefixes preserving colimits
(Theorem \ref{th:pos}).
For (\ref{it:b}), it suffices to demonstrate that the two conditions on branch\-ing-time processes hold for the limit process, provided they hold for all of the processes in the diagram.
For this we use the notation from the proof of Theorem \ref{thm:plim}.
Since $s\to 0$ in {\%B} implies $s$ initial,
a noninitial schedule $P^ip^i$ in a colimit diagram guarantees that the colimit schedule $Pp$ is noninitial, via the injection $\^p^i_p$.
Thus there can be at most one initial alternative in the limit process $P$ by virtue of there being at most one initial alternative in each of the processes $P^i$.
To establish the condition that $P$ takes nontrivial alternative orderings to proper prefixes, we note that having the projection $\^p^i:P\to P^i$ be a prefix preserving morphism means that
\begin{diagram}
P^ip^i&\rTo{\^p^i_p}{}&Pp\\
\dIntoA{}{}&&\dIntoA{}{}\\
P^i{p'}^i&\rTo{\^p^i_{p'}}{}&Pp'\\
\end{diagram}
is a pullback, and thus if $P^ip^i\hookrightarrow P^i{p'}^i$ is a proper prefix for some $i\in J$, then so must $Pp\hookrightarrow Pp'$ be.
For (\ref{it:bc}), it suffices to recall that colimits commute with each other (see Mac~Lane \cite{Mac}, ch.~9).
\end{proof}
\noindent
Similar results for the coherent process categories follow in much
the same way.
It will also be of interest to know when lifted schedule functors preserve process limits
so we should also mention
\begin{corollary}
Given any functor $F:\%B\to\%B'$ preserving colimits of type $J^{op}$,
$F_{\#}:\PROC{\%B}\to\PROC{\%B'}$ preserves limits of type $J$.
\end{corollary}
\noindent
This follows immediately from the construction of process limits.
The corresponding results for the special cases also hold.
\begin{corollary}
Given any functor $F:\%B\to\%B'$ preserving prefixes and colimits of
type $J^{op}$,
\begin{itemize}
\item
$F_{\#}:\PROCK{\%B}\to\PROCK{\%B'}$ preserves limits of type $J$.
\item
$F_{\#}:\PROCB{\%B}\to\PROCB{\%B'}$ preserves limits of type $J$.
\item
if $F$ preserves all filtered colimits,
$F_{\#}:\PROCBC{\%B}\to\PROCBC{\%B'}$ preserves limits of type $J$.
\end{itemize}
\end{corollary}
We now consider the question of colimits in the process categories.
In the case of coproducts, at least, the results are even simpler than
for limits.
\begin{theorem}
For {\em any} category $\%B$, \PROC{\%B} has an initial object and all (small)
coproducts.
\end{theorem}
\begin{proof}
Given a $J$-indexed collection of processes $\{P^i\in\PROC{\%B}:i\in J\}$,
by the cocompleteness of $\_\Cat$, we can construct $A_P=\coprod_{i\in J} A_{P^i}$ with injections $A_{\^i^i}:A_{P^i}\to A_P$.
Since this is a coproduct in $\_\Cat$, there exists a unique $P:A_P\to \%B$ for which $PA_{\^i^i}=P^i$ which we take to be the coproduct process.
We then construct the process injections $\^i^i:P^i\to P$ using the alternative maps $A_{\^i^i}$ together with the identity natural transformation $PA_{\^i^i}=P^i$.
That $(P,\^i^i)_{i\in J}$ has the required universal property is straightforward.
\end{proof}
Since $A_P$ introduces no morphisms between alternatives in $A_{P^i}$ and $A_{P^j}$ for $i\ne j$, prefix closure of each of the $P^i$ implies prefix closure of $P$.
The morphisms $\^i^i$ are all prefix preserving in a trivial way.
In the case of branching time processes, we have to modify the construction in the case where more than one $A_{P^i}$ contains an initial element $\bot$; these become identified in $A_P$.
Since there is already a stipulation about the schedules that may be associated with $\bot$ by $P^i$ if $P^i\in\PROCB{\%B}$, i.~e., only the initial schedule, the functor $P$ will continue to be well-defined and serve as a colimit in \PROCB{\%B}.
For \PROCBC{\%B} we need to observe that a filtered diagram of objects in $A_P$ must lie entirely within the image of one of the $A_{\^i^i}$. Given this, the closure of $P$ follows from the closure of the $P^i$.
These observations suffice to show
\begin{corollary}
\PROCK{\%B}, \PROCB{\%B} and \PROCBC{\%B} have an initial object and
all small coproducts.
\end{corollary}
Notice that in order for coproducts to exist, there are no requirements
on $\%B$ aside from those we have already made for the sake of being able to
construct \PROCK{\%B}, \PROCB{\%B} and \PROCBC{\%B} at all.
The existence of coproducts relies, in some sense, strictly on the properties of \_\Cat.
The initial process $0$ is that for which the alternative set is empty.
Essentially, this is the deadlocked {\sc Stop} (or {\sc Halt}) process that cannot do anything at all
and thus is quite distinct from the {\sc Skip} process which is at least {\em able to do nothing},
i.~e., in the sense of being able to exhibit an idle behavior.
Coequalizers in the process categories, where they exist at all, do not,
as far as we know, have any particularly pleasant characterization.
It is fortunate that we have no particular use for them.
\section{Process Operations}
There is now sufficient machinery to describe some basic operations on
processes analogous to those given for schedules.
We have essentially three tools for defining operations on processes.
\begin{itemize}
\item
Categorical limits and colimits,
which we shall use to provide definitions for the concurrence and choice operations on processes.
\item
Pointwise lifting of schedule functors,
which we have already seen for describing the lifting of alphabet renaming and restriction functors in the case of processes over labeled schedules.
Pointwise lifting will also be used to describe process orthocurrence.
\item
Iteration, i.~e., fixpoint constructions.
\end{itemize}
\subsection{Choice}
The union or {\em choice} operation is the one operation that is entirely new with the introduction of processes.
Intuitively, the process $P+Q$ is that which may either behave like $P$ or like $Q$.
We see almost immediately that the coproduct in the process category essentially fills this role by taking the
union of all of the alternative sets of the processes concerned without introducing any additional structure
within the component processes, except in the case of branch\-ing-time processes (\PROCB{\%B} or \PROCBC{\%B}),
where we may need to merge initial alternatives.
Clearly the {\sc Stop} or $0$ process is an identity for the choice operation.
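As a finite, discrete-alternative illustration of choice as coproduct (a sketch only: the dictionary encoding and the alternative names are hypothetical, and schedules are treated as opaque values):

```python
# Sketch: a finite discrete process as a map from alternative names to
# schedules.  Choice P + Q is the coproduct: the disjoint union of the
# alternative sets, with no structure added between the two sides.

def choice(p, q):
    """Coproduct of two discrete processes given as dicts."""
    out = {}
    for name, sched in p.items():
        out[("L", name)] = sched            # tagged injection of P
    for name, sched in q.items():
        out[("R", name)] = sched            # tagged injection of Q
    return out

STOP = {}                                   # the initial process 0: no alternatives
SKIP = {"idle": ()}                         # one alternative, the empty schedule
```

Note that {\sc Stop}, having no alternatives, contributes nothing to a choice, while {\sc Skip} contributes its single idle alternative; the two are quite different processes.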
\subsection{Concurrence}
We recall for a moment our interpretation in which a process is a specification of event occurrences that may be satisfied by a given observation and a process morphism is a proof of implication between such specifications.
Then, given a diagram of processes, an observation --- which itself can be viewed as a particularly tight specification --- that satisfies all of the specifications of the diagram in a consistent manner will essentially form a cone over that diagram.
Having a single specification or process to represent the entire diagram is just the notion of having a limit over that diagram.
Alternatively, the limit could be viewed as a most general observation that captures everything that is known in the diagram.
We saw, in the context of schedules, that we are actually taking colimits,
since the schedule morphisms run opposite to the direction of the implications they represent.
{\em Process} morphisms, however, run parallel to the corresponding implications,
so the corresponding notion of system composition is given by a {\em limit} rather than by a colimit.
{\em Concurrence} is the special case of system composition in which we compose a number of entirely
independent processes,
i.~e., the diagram of processes being composed is discrete.
The concurrence of processes $P^1$,$P^2$,\dots will then be their product $P^1\times P^2\times\dots$,
this being analogous to the situation of schedule concurrence as a coproduct.
Applying the limit construction to a discrete diagram of two processes $P^1$ and $P^2$, we first see that the alternative set $A_{P^1\times P^2}$ is $A_{P^1}\times A_{P^2}$.
This is intuitively sensible in that choosing an alternative of the combined process is merely a matter of independently choosing an alternative from each of the component processes.
Given such a choice $p_1\in A_{P^1}$, $p_2\in A_{P^2}$,
the corresponding schedule will be the schedule coproduct $P^1p_1\|P^2p_2$;
i.~e., once we have picked a pair of alternatives, we form the concurrence of the schedules in the previous manner, so that every event of each schedule appears and no temporal constraints are introduced between events on distinct schedules.
As noted before we will, as a matter of clarity, denote the schedule concurrence using the previous notation $\|$
rather than $+$ for coproduct, in order that it not be confused with the process choice ($+$) operation.
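Concurrence can be sketched in the same finite setting, with a schedule modelled as a pomset given by an event set and a set of order pairs. Again the encoding is illustrative, not the categorical definition:

```python
# Sketch: concurrence of two finite discrete processes.  A schedule is
# a pomset (events, order-pairs); the schedule concurrence s || t is
# their disjoint union with no constraints between the two sides.

def sched_concurrence(s, t):
    (ev_s, le_s), (ev_t, le_t) = s, t
    ev = {("L", e) for e in ev_s} | {("R", e) for e in ev_t}
    le = {(("L", a), ("L", b)) for a, b in le_s} | \
         {(("R", a), ("R", b)) for a, b in le_t}
    return ev, le                           # no order between L and R events

def concurrence(p, q):
    """Product process: alternatives are pairs, schedules are s || t."""
    return {(np, nq): sched_concurrence(sp, sq)
            for np, sp in p.items() for nq, sq in q.items()}
```

Choosing an alternative of the product is precisely an independent choice from each component, as described above.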
\subsection{Orthocurrence}
The process orthocurrence $P\otimes Q$ can be obtained via a pointwise lifting
of schedule orthocurrences. That is, we take each possible choice of an
alternative of $P$ and an alternative of $Q$ and construct the corresponding
schedule orthocurrence.
However, this involves lifting a {\em binary} schedule operation and
we have only thus far discussed how one lifts schedule operations defined as {\em unary} functors.
If we can find some way to view schedule pairs as being schedules in their own right,
then the desired construction follows automatically.
First of all, given a pair of schedule categories \%B and $\%B'$,
we have a canonical injection functor $\<{-},{-}>:\PROC{\%B}\times\PROC{\%B'}\to\PROC{\%B\times\%B'}$.
Given a pair of processes $P\in \PROC{\%B}$, $Q\in \PROC{\%B'}$,
each of which we recall is actually a functor, we can pair these to obtain
a functor, i.~e., a process $\<P,Q>:A_P\times A_Q\to\%B\times\%B'$ in $\PROC{\%B\times\%B'}$.
Process morphisms pair in a similar manner.
For pairings of prefix-closed processes we need to consider the
prefix classifiers $k$ and $k'$ for \%B and \%B', respectively.
In this case, we can use $\<k,k'>$ as a prefix classifier for $\%B\times\%B'$,
i.~e., a pair of maps defined to be a prefix map in $\%B\times\%B'$ iff both components are prefixes.
Then the injection constructed above will take pairs of prefix-closed processes to prefix-closed processes and similarly for morphisms.
Thus we also have a canonical injection $\<{-},{-}>:\PROCK{\%B}\times\%P_{k'}[\%B']\to\%P_{\<k,k'>}[\%B\times\%B']$.
If $P$ and $Q$ are branch\-ing-time processes, then $\<P,Q>$ necessarily has only one initial alternative, since the initial schedule of $\%B\times\%B'$ is $\<0_{\%B},0_{\%B'}>$.
Likewise a paired prefix will be proper iff at least one of its components is proper.
These observations allow us to conclude that $\<P,Q>$ will be a branch\-ing-time process as well.
If $P$ and $Q$ are both closed branch\-ing-time processes, and \%B, $\%B'$ have all filtered colimits, then filtered colimits in $\%B\times\%B'$ will simply be pairs of colimits, $\<P,Q>$ is likewise a closed process, and again, a similar conclusion can be drawn for process morphisms.
Now we can recall the schedule orthocurrence functor $\otimes:\%B\times\%B\to\%B$
and note that it satisfies the necessary properties for it to lift to a process functor
${\otimes}_{\#}:\PROC{\%B\times\%B}\to\PROC{\%B}$.
\begin{definition}
The {\em process orthocurrence} bifunctor $\otimes$ is the composite ${\otimes}_{\#}\circ\<{-},{-}>:\PROC{\%B}\times\PROC{\%B}\to\PROC{\%B}$.
\end{definition}
It should be noted that the process orthocurrence bifunctor itself provides $\PROC{\%B}$ with a monoidal structure,
one for which the corresponding identity is the process $I$ consisting of a single alternative whose schedule is $\:I$, the orthocurrence identity in \%B.
A more descriptive name for $I$ might be {\sc Bang} or {\sc Tick}
since the single underlying schedule is effectively one which has a single indeterminate event.
\subsection{Concatenation}
For concatenation, recall the universal construction we used for schedules.
The basic definition for concatenation on processes is essentially the same, albeit with reversed arrows and some careful consideration as to what the concatenation specifier should be.
In the following, $I_{\bot}$ is a two-alternative process in which the alternatives are ordered.
The earlier one, $\bot$, has an initial schedule ($I_{\bot}(\bot)=\:0_{\%B}$) while the other alternative $\top$ points to the schedule $\:I_{\%B}$.
In \PROCB{\%B} and \PROCBC{\%B}, we have $I_{\bot}=I+0$, but in the more general process categories, $I_{\bot}$ needs to be constructed explicitly to insert the $\bot<\top$ ordering in $A_{I_{\bot}}$.
A morphism $e:P\to I_{\bot}$ can be seen as picking out an ``event'' of $P$.
In slightly more detail, each alternative $p$ of $A_P$ is taken by $A_e$ either to $\bot$ or to $\top$.
In the latter case we then have to choose a schedule map $I_{\bot}\top(=\:I)\to Pp$,
i.~e., an actual event of $Pp$.
Moreover these choices must be such that if we choose an event for a given alternative, we must choose the corresponding event from each later alternative.
In the case of prefix-preserving maps, we must also choose the corresponding event from each earlier alternative as well.
In short, $e:P\to I_{\bot}$ picks out an event as it threads its way through the various alternatives of the process.
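For a finite chain of alternatives, this threading can be made concrete: the {\em alternative} part $A_e$ of a map $e:P\to I_{\bot}$ corresponds exactly to an upward-closed set of alternatives sent to $\top$ (the schedule components then pick the actual events), so a chain of $n$ alternatives admits $n+1$ such labelings. A sketch under that assumption, with illustrative string labels:

```python
# Sketch: the alternative maps A_e : A_P -> {bot < top} out of a finite
# chain of alternatives, listed lowest first.  Monotonicity forces the
# alternatives sent to "top" to form an upward-closed subset of the
# chain, so there are exactly n + 1 such maps.

def event_pickers(chain_alts):
    """All monotone assignments of 'bot'/'top' along a chain."""
    n = len(chain_alts)
    return [{a: ("bot" if i < k else "top")
             for i, a in enumerate(chain_alts)}
            for k in range(n + 1)]
```

The all-`bot` labeling corresponds to the map choosing no event at all.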
\begin{definition}
Given processes $P$, $Q$, and a map $d:2\to I_{\bot}\times I_{\bot}$ (a {\em concatenation specification}), a {\em pre-concatenation} $R=\<R,c_R,\^r_R>$ is
\begin{itemize}
\item
a process $R$, together with
\item
a process map $c_R:R\to P\times Q$, and,
\item
for every pair of maps $(e:P\to I_{\bot},e':Q\to I_{\bot})$, a particular process map $\^r_R(e,e'):R\to 2$ such that $d\circ\^r_R(e,e')=(e\times e')\circ c_R$.
\end{itemize}
The {\em concatenation} $P\cat Q$ is a preconcatenation $\<P\cat Q,c,{-}\cat{-}>$ such that any preconcatenation $R$ factors through it, i.~e., there is a unique map $R\to P\cat Q$ compatible with the structure maps of the two preconcatenations.
\end{definition}
This definition has essentially the same content as in the schedule case: for each possible choice of an event from $P$ and an event from $Q$ we introduce an additional constraint as specified by $d$.
It is only that the notion of picking an event from a process is complicated by there possibly being more than one schedule containing the event in question.
As with schedule concatenation, having the concatenation factor through all of the preconcatenations ensures that the resulting structure is minimal in the sense of not introducing any unnecessary constraints or superfluous events.
Again, as with schedule concatenation, we could express process concatenation in an equivalent manner as a single pullback
in which the bottom arrow is the product over all possible choices of pairs of maps $(e:P\to I_{\bot},e':Q\to I_{\bot})$.
Though it is not clear this would be any more illuminating than the original definition, it does provide, via Theorem \ref{thm:plim} and its corollaries, an assurance that the concatenation as defined always exists assuming \%B to be almost cocomplete.
This construction however will only yield a recognizable notion of concatenation if we make a sensible choice for $d:2\to I_{\bot}\times I_{\bot}$, much as the definition for schedule concatenation rests on the choice of the schedule concatenation specifier $\:d:\:I\|\:I\to \:2$.
Thus we should devote some attention to the question of
which map $d$ we should actually use.
$I_{\bot}\times I_{\bot}$ is a process with four alternatives, which we may denote, for lack of better names, $\:\OO,\:\OI,\:\IO,\:\II$.
Carrying through the product construction, the ordering on
$A_{I_{\bot}\times I_{\bot}}$ is the usual product ordering, and the corresponding schedule diagram assigns $\:0$ to $\:\OO$, $\:I$ to $\:\OI$ and $\:\IO$, and $\:I\|\:I$ to $\:\II$, with all of the maps being the canonical ones. If $\%B=\%D\hPhys$, then the \:I schedules really are singleton event schedules and the underlying event maps are the evident ones.
At first we shall take $2$ to be just like $I_{\bot}\times I_{\bot}$ except that in the alternative \:\II, in which ``both'' events are present, we introduce an extra temporal constraint between them. Otherwise the two processes are identical; in particular, we take $A_d$ to be the identity map.
As for ``introducing the extra temporal constraint,'' this is done by taking the schedule $2(\:\II)$ to be the schedule concatenation $\:I_{\%B}\cat\:I_{\%B}$ rather than the concurrence as above; otherwise the underlying schedules and schedule morphisms of $2$ are as before.
The maps into $2(\:\II)$, i.~e., those on the bottom and the right of the previous diagram, are those of that diagram composed with the canonical map $\:I_{\%B}\|\:I_{\%B}\to\:I_{\%B}\cat\:I_{\%B}$ for schedule concatenation.
We also use the latter for $d_{\:\II}$, the one nontrivial portion of the natural transformation underlying the {\em process} morphism $d:2\to I_{\bot}\times I_{\bot}$.
\begin{proposition}\label{prop:concatstr}
$A_c:A_{P\cat Q}\to A_{P\times Q}$ is an isomorphism.
For any pair of alternatives $\<p,q>\in A_{P\cat Q}$,
the corresponding schedule will be $Pp\cat Qq$.
Likewise the schedule portions of the maps $c:P\cat Q\to P\times Q$
and $e\cat e':P\cat Q\to 2$ are obtained from the canonical diagram
for schedule concatenation
\begin{equation}\label{eq:pconcat}
\begin{diagram}
\:I\|\:I & \rTo^{e_p\|e'_q}& Pp\|Qq\\
\dTo^{\:d} && \dTo_{c_{\<p,q>}}\\
2 & \rTo^{{e\cat e'}_{\<p,q>}} & Pp\cat Qq\\
\end{diagram}
\end{equation}
\end{proposition}
\begin{proofo}
That $A_c$ is an isomorphism follows immediately from consideration of the pullback construction of concatenation (\ref{diag:concatpb}).
There, we notice that the two alternative sets on the right are isomorphic.
Since the alternative sets themselves form a corresponding pullback by the construction of Theorem \ref{thm:plim}, the two alternative sets on the left must be isomorphic as well.
$P\cat Q$ is indeed a preconcatenation,
since assembling all of the schedule maps $c_{\<p,q>}$ and ${e\cat e'}_{\<p,q>}$
extracted from the diagrams \ref{eq:pconcat}
does produce actual process morphisms $c$ and $e\cat e'$.
It then remains to be shown that any other preconcatenation $R$ factors through it (as in \ref{diag:preconcat}).
Having $A_c:A_{P\cat Q}\to A_{P\times Q}$ be an isomorphism makes determining
the map $A_R\to A_{P\cat Q}$ a trivial matter.
We are left with the task of producing the underlying schedule maps $Pp\cat Qq\to Rr$ for each alternative $r$ of $R$ (where, in each case, $\<p,q>=A_{c_R}r$).
This is accomplished by showing that $Rr$ is a {\em schedule} preconcatenation and hence the desired schedule maps $Pp\cat Qq\to Rr$ exist and are unique by virtue of $Pp\cat Qq$ being a schedule concatenation.
Showing that $Rr$ is a {\em schedule} preconcatenation entails showing that for any pair of schedule maps $(\:e:\:I\to Pp,\:e':\:I\to Qq)$ we can find a map $\:e\cat\:e':\:2\to Rr$ making
$$\begin{diagram}
\:I\|\:I & \rTo^{\:e\|\:e'}& Pp\|Qq\\
\dTo^{\:d} && \dTo_{(c_R)_r}\\
\:2 & \rTo^{\:e\cat\:e'} & Rr\\
\end{diagram}$$
commute.
\begin{lemma}
Given a process $P$, some alternative $p$ thereof, and
any schedule map $\:e:\:I\to Pp$, there exists a process map $e:P\to I_{\bot}$ such that $e_p=\:e$.
\end{lemma}
\begin{proofo}
Take $A_e(p')=\top$ if $p\le p'$, $\bot$ otherwise.
Likewise, take $e_{p'}=P(p\le p')\circ\:e$ if $p\le p'$ and the canonical initial map otherwise.
\end{proofo}
\noindent{\it Outline of Proof of Proposition \ref{prop:concatstr} (conclusion):\quad}
Given schedule maps $\:e$ and $\:e'$, we have corresponding process maps $e$ and $e'$ from which we can take advantage of $R$ being a process preconcatenation.
This produces a process map $\^r(e,e'):R\to 2$ and then one has to check that
the schedule map $\^r(e,e')_r:\:2\to Rr$ is exactly the map we want for $\:e\cat\:e'$.
\end{proofo}
This is the same notion of concatenation that appeared in the original pomset model, which in turn was inspired by that for trace theory. That is, $P\cat Q$ is the set of all possibilities involving a schedule of $P$ concatenated with a schedule of $Q$.
One may now notice a problem in applying this to the case of prefix ordered processes:
the map on the right side of diagram \ref{eq:2scheds}, i.~e., the one mapping $\:0\conc\:I_{\%B}\to\:I_{\%B}\cat\:I_{\%B}$, is {\em not} a prefix map.
That there is a problem should not actually be surprising since it is with the prefix ordering structure that we make any kind of connection between the alternative ordering and the temporal constraints. Consider that when we advance from an alternative $p$ to a later alternative $p'$, the new events of $Pp'$ will necessarily be after the ones that have already appeared in $Pp$. But when forming the process concatenation $P\cat Q$, one would hope to capture the sense in which nothing happens in process $Q$ until after $P$ is ``finished.''
In our context, given alternatives $p<p'$ in $A_P$ and $q<q'$ in $A_Q$, if we have an ordering $\<p,q>\le\<p,q'>$, then $\<p,q'>$ is a later stage of $A_{P\cat Q}$ at which we can arrive by ``doing something'' on $Q$.
The fact that the $Q$ portion of the process $P\cat Q$ is now active suggests that we should not then be able to have $\<p,q'><\<p',q'>$, i.~e., to be able to do something on $P$. In general, having $A_{P\cat Q}$ be isomorphic to $A_{P\times Q}$ turns out to be not so appropriate.
The immediate problem of $\:0\conc\:I_{\%B}\to\:I_{\%B}\cat\:I_{\%B}$
not being a prefix map is solved by using a different $2$,
one in which the troublesome ordering $\:\OI\le\:\II$ is simply not present in $A_2$.
This can easily be done since we only need to have $A_2$ mapping into $A_{I_{\bot}\times I_{\bot}}$;
they do not need to be isomorphic.
Then we can take the underlying schedule maps to be as before, minus the map associated with the deleted ordering $\:\OI\le\:\II$,
and carry through the previous concatenation construction as before.
Consider for example the concatenation of two processes $P$ and $Q$ each with a linear set of alternatives, the first alternative of each being initial (the empty schedule).
Using the original version of $2$ and ignoring the prefix ordering requirement, i.~e., working in \PROC{\%B}, the alternative set $A_{P\cat Q}$ of the concatenation is exactly like that of the concurrence $A_{P\times Q}$.
Using the revised $2$ and working in the prefix ordered category \PROCK{\%B},
an argument similar to that of Proposition \ref{prop:concatstr} shows that the underlying sets of $A_{P\cat Q}$ and $A_{P\times Q}$ are the same, though we can no longer expect the orderings to agree.
The underlying object set of $A_{P\cat Q}$ is thus fixed; only its ordering remains to be determined.
That $P\cat Q$ is at least a preconcatenation requires that for every pair of maps $(e:P\to I_{\bot},e':Q\to I_{\bot})$, we have a map $P\cat Q\to 2$, the monotonicity of which dictates that certain orderings not be present in $A_{P\cat Q}$.
So for any particular $(e, e')$ pair we can deduce the maximal order that $A_{P\cat Q}$ can have to allow for a map into $A_2$
in which we placed circles on those alternatives of $A_P$ and $A_Q$ that are sent by $A_e$ (resp. $A_{e'}$) to the noninitial alternative $\:I$ of $I_{\bot}$.
Note that if $A_ep=\:I$, then $A_ep'=\:I$ for all $p'\ge p$ by functoriality.
Combining all of the constraints on the $A_{P\cat Q}$ ordering imposed by all of the possible choices for $e$ and $e'$, we obtain the actual ordering for $A_{P\cat Q}$
\begin{equation}
\setlength{\unitlength}{0.0088in}
\begin{picture}(140,100)(30,720)
\thicklines
\put(195,740){\makebox(0,0)[l]{$A_{P\cat Q}$}}
\put( 70,770){\vector( 0,-1){30}}
\put( 70,740){\vector( 0,-1){30}}
\put( 70,710){\vector( 1, 0){30}}
\put(100,710){\vector( 1, 0){30}}
\put(130,710){\vector( 1, 0){30}}
\put(160,710){\vector( 1, 0){30}}
\put( 70,740){\vector( 1, 0){30}}
\put(100,740){\vector( 1, 0){30}}
\put(130,740){\vector( 1, 0){30}}
\put(160,740){\vector( 1, 0){30}}
\put( 70,770){\vector( 1, 0){30}}
\put(100,770){\vector( 1, 0){30}}
\put(130,770){\vector( 1, 0){30}}
\put(160,770){\vector( 1, 0){30}}
\put( 25,740){\makebox(0,0)[r]{$A_P$}}
\put( 30,740){\vector( 0,-1){ 30}}
\put( 30,770){\vector( 0,-1){ 30}}
\put( 30,775){\makebox(0,0)[b]{$\bot_P$}}
\put(130,820){\makebox(0,0)[b]{$A_Q$}}
\put(160,810){\vector( 1, 0){ 30}}
\put(130,810){\vector( 1, 0){ 30}}
\put(100,810){\vector( 1, 0){ 30}}
\put( 70,810){\vector( 1, 0){ 30}}
\put( 70,800){\makebox(0,0)[br]{$\bot_Q$}}
\end{picture}
\end{equation}
\noindent
though we omit the proof that this alternative order together with the usual schedule concatenations constitutes the actual process $P\cat Q$ in \PROCK{\%B}.
Notice that this ordering of $A_{P\cat Q}$ manages to omit exactly those pairs of alternatives for which the corresponding schedule map is {\em not} a prefix.
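As a concrete, informal check, the ordering just depicted can be computed mechanically. The following Python sketch (not part of the formal development) assumes, as in the example above, that $A_P$ and $A_Q$ are linearly ordered and indexed from $0$, with index $0$ the initial (empty-schedule) alternative; a pair $\<p,q>$ then precedes $\<p',q'>$ precisely when $p\le p'$, $q\le q'$, and either $p=p'$ or $q=\bot_Q$:

```python
from itertools import product

def concat_order(P, Q):
    """Ordering on the underlying set A_P x A_Q of A_{P.Q}: advancing
    in P is only possible while Q still sits at its initial (empty)
    alternative, index 0.  P and Q are given as lists of alternatives
    in increasing order."""
    pairs = list(product(range(len(P)), range(len(Q))))
    le = {(a, b) for a in pairs for b in pairs
          if a[0] <= b[0] and a[1] <= b[1]      # componentwise order
          and (a[0] == b[0] or a[1] == 0)}      # P moves only at bot_Q
    return pairs, le
```

The excluded pair $\<0,1>\le\<1,1>$ reflects the earlier observation that once the $Q$ portion of the process is active, one should no longer be able to do something on $P$.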
In the branch\-ing-time process category \PROCB{\%B}, since we know that there exists a construction of concatenation as a limit, this limit will in fact be a branch\-ing-time process if its components are.
For the concatenation construction to be usable in the category of closed processes \PROCBC{\%B} it suffices that schedule concatenation, itself a colimit, commute with the taking of filtered colimits, but this follows from colimits commuting with colimits in general.
\section{Iterated operations}
Finally, we have to contend with the class of iterated operations.
One such is the Kleene star, or iterated concatenation, $P^{*}$, in which the alternatives are sequences of alternatives from $P$ and the associated schedules are the corresponding schedule concatenations.
Analogously, we also have the iterated concurrence $P^{\dagger}$ as introduced in \cite{Pr84}.
These, among others, can be constructed by taking the fixpoint of a suitable process functor.
In general, it will be useful to have criteria by which one may know that these fixpoints exist.
We start by stating a slight generalization of a result of Plotkin and
Smyth \cite{PS78}:
\begin{theorem}\label{th:fix}
Given two categories \%C, \%D, the latter having $\omega$-colimits and
an initial object 0, along with a functor $F:\%C\times\%D\to\%D$
preserving $\omega$-colimits,
there exists an initial fixpoint $F^{\omega}:\%C\to\%D$.
That is, we have
\begin{itemize}
\item a functor $F^{\omega}:\%C\to\%D$ and
\item a canonical natural isomorphism $\tau:F\<\id_{\%C},F^{\omega}>\nato F^{\omega}$
that is initial in the category of pre-fixpoints of $F$, a pre-fixpoint being
a pair $(G,\gamma)$ in which $G:\%C\to\%D$ is a functor and
$\gamma:F\<\id_{\%C},G>\nato G$.
\end{itemize}
Furthermore, if {\%C} has $\omega$-colimits, then $F^{\omega}$ preserves them.
\end{theorem}
\begin{proofo}
We can fix an object $A$ of \%C, in which case the proof follows the
same line as in \cite{PS78}, that is, we take the colimit of
That this functor preserves $\omega$-colimits is a matter of the
functors $1+{-}$ and ${-}\|{-}$ preserving $\omega$-colimits.
By Theorem \ref{th:fix}, we then have a functor
\begin{eqnarray*}
{}^\dagger:\PROC{\%B}&\to&\PROC{\%B}\cr
P&\mapsto&P^\dagger\cr
\end{eqnarray*}
along with a canonical morphism $P^\dagger\to 1+P\|P^\dagger$ and the fact that
${}^\dagger$ also preserves $\omega$-colimits.
In a similar manner, for a particular process $P$ we can define the
iterated concatenation $P^*$ as the fixpoint of
$$1+(P\cat\pi_2):\PROC{\%B}\to\PROC{\%B}$$
which also provides a canonical morphism $P^*\to 1+P\cat P^*$.
Since the $\cat$ functor only preserves filtered colimits in its second
argument, this is the best we can do. This construction is
applicable in \PROCK{\%B}, \PROCB{\%B}, and \PROCBC{\%B} as well.
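To illustrate the fixpoint construction in a deliberately degenerate setting, the following Python sketch treats a process as a finite set of strings and computes the analogue of $P^*$ as the colimit of the chain $0\to F0\to F^20\to\cdots$ for $F=1+(P\cat{-})$. The length bound is an artifact of the sketch (so that the chain stabilizes), not of the theory:

```python
def star_fixpoint(P, maxlen=4):
    """Toy analogue of P* as the initial fixpoint of X |-> 1 + P.X:
    iterate F from the initial object (the empty language).  Words
    longer than maxlen are discarded so that the chain stabilizes."""
    def F(X):
        # 1 contributes the empty word; P.X contributes concatenations
        return {""} | {p + x for p in P for x in X if len(p + x) <= maxlen}
    X = set()                      # the initial object 0
    while True:
        X2 = F(X)
        if X2 == X:                # colimit of the omega-chain reached
            return X
        X = X2
```

For instance, `star_fixpoint({"ab"}, 4)` stabilizes at `{"", "ab", "abab"}`, the truncated Kleene star of the one-word language.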
However, in \PROCBC{\%B}, unlike in the other cases, we have the closure
property: a closed process containing a particular ordered (a subcase of filtered) sequence of schedules must also contain the colimit schedule as an
alternative. We have all finite concatenations of schedules from $P$ as before, but now we also have all infinite concatenations of schedules from $P$ as alternatives.
where each of the $P_i$'s is a process that ``computes the function
$F_i$'' in the sense to be described below. Our contention is that
our notion of system composition via taking the limit of this diagram
matches the least fixpoint semantics of Kahn, that is, taking the
limit of this diagram yields a process $P$ with morphisms
$P\to\Sigma_X^\leomega$ and $P\to\Sigma_W^\leomega$ such that $P$
computes the function $\overline{F}$.
\subsection{Input Locations}
For the purposes of this chapter we will be working within the
category of closed branching-time coherent processes (\CPROCBC{\%B}).
In addition we need to be able to have temporal constraints forcing
events to strictly precede other events, and to prohibit morphisms
from identifying such events that are in strict precedence.
Thus we need to have our underlying temporal domain be some $\%D\hPhys$.
Recall that we say an event $x$ {\em strictly precedes} another event
$y$ (write $x<y$) if there exists no morphism $\delta(x,y)\to I$.
We first characterize the \cite{GP} notion of {\em input location} in
our framework. Intuitively, a given component $Q$ of $P$, i.e., where
we have a morphism $f:P\to Q$, is an {\em input} if every alternative
of $Q$ has some alternative of $P$ making use of it.
\begin{definition}
A coherent process morphism $f:P\to Q$ is an {\em input} if
\begin{itemize}
\item
$A_Q$ has an initial element $\bot_Q$, and $A_f^{-1}(\bot_Q)$ is nonempty
(i.e., there exist alternatives of $A_P$ mapped by $A_f$ to $\bot_Q$).
\item
Given $p\in A_P$, let $q=A_f(p)$.
If $q'\in A_Q$ is such that $q\le q'$, there exists
an extension $p'\in A_P$ (i.e., $p\le p'$) such that $A_f(p')=q'$.
Any such extension satisfies the following:
\begin{enumerate}
\item\label{inp:strict}
every event in $P(p')$ not already in $P(p)$ or $f_{p'}Q(q')$ is strictly
preceded by some event in $f_{p'}Q(q')$ not already in $f_pQ(q)$
(i.e., every new event of $p'$ is strictly preceded by one of the new
events of $q'$),
\item\label{inp:early}
if $Q(q)$ is empty (initial), $\delta(x,f_{p'}y)=0$ for all
$x\in P(p)$, $y\in Q(q')$ (i.e., the first input events can appear
arbitrarily early with respect to the events of $p$).
\end{enumerate}
\item
$f$ is a restriction, i.e., $f=t^{\#}$ for some injective
$t:\sigma Q\to\sigma P$.
\end{itemize}
\end{definition}
The essential idea is that the process $P$ has a behavior to follow even
if no input is received and any behavior involving a certain amount of
input can be extended if we receive more input. The last clause
ensures that $P$ does not impose temporal constraints on the input
events stronger than those given by $Q$.
\subsection{Computing a function}
If we are to discuss how processes can compute functions, we first
need to say how we are representing the values of these functions.
\begin{definition}
A process $Q$ in \CPROCBC{\%B} {\em represents} a cpo {\_D} if $A_Q$ is
isomorphic to {\_D}.
\end{definition}
Essentially, $Q$ is a process describing one or more channels.
Elements of {\_D} are then histories of these channels.
The conditions on branching-time processes guarantee that when we
advance from one alternative of {\_D} to the next, new events will
appear.
Since we are dealing with labeled schedules, $I$ has a terminal alphabet
$\{\bullet\}$. For any alphabet \^G and any action $a\in\^G$, the translation $a:\{\bullet\}\to\^G$ lifts to a process translation which can then be applied to $I$ to get the single-event process $a^{\#}I\in\PROC{\%B_{\^G}}$. In fact the alphabet $\^G$ itself can be constructed as a process
$$\coprod_{a\in\^G}a^{\#}I$$
obtained by taking the coproduct in \PROC{\%B_{\^G}}.
It should then be clear that
\begin{proposition}
The process $\Gamma^{\leomega}$ represents the cpo $\Gamma^{\leomega}$.
\end{proposition}
\begin{proposition}
If $P,Q\in\CPROCBC{\%B}$ represent $\_D_P$ and $\_D_Q$ respectively,
$P\times Q$ represents $\_D_P\times\_D_Q$.
\end{proposition}
For the purposes of constructing diagrams representing Kahn networks,
we require that a given cpo $\_D$ always be represented in the same
way wherever it occurs, that is, all of the processes representing
$\_D$ in a particular diagram are canonically isomorphic.
For this it suffices to take
$\_D$ to always be a product of $\Sigma^\leomega$'s.
\begin{definition}
A process $P\in\CPROCBC{\%B_{\Sigma}}$ {\em computes a function}
$F:\_D_1\times\ldots\times\_D_n\to\_D_0$
if there exist coherent morphisms $f_i:P\to Q_i$ $(i=0\ldots n)$,
One then shows, for any event $y$ in $p_n$ not already in $p_{n-1}$,
that such a loop does not exist, on the assumption that there are no
such loops in $p_{n-1}$.
The possibilities for loops in $p_n$ are as follows.
The case $n=0$ is trivial since $a_0$ has no events.
For $n=1$, use condition~\ref{inp:early} in the definition of input morphism.
For $n>1$, use condition~\ref{inp:strict}, that successive events
$z,z'$ of $\Gleo(a_n)$ have $\delta(z,z')=I$ and that $f$ is a
restriction (so that
$\delta(\:f_{{\eighttt p}_n}z,\:f_{{\eighttt p}_n}z')=I$ as well).
Having shown that $R$ has alternatives for each of the elements of the
graph of $\Fbar$, it only remains to show that $g\tau:R\to X$ is an
input morphism. This can be left as an exercise as the arguments
involved are not too dissimilar from what has gone before.
\end{proofo}
To go from this to a general result is a matter of considering that
for any given system of functional equations, the least fixpoint we obtain
is the same, whether we compute it in stages progressively identifying one pair of variables at a time or identifying them all at once.
Likewise, in any diagram of processes like (\ref{pict:pnet}), we obtain the same result whether we compute the limit all at once or substitute equalizers \nobreak piecemeal.
\chapter{Directions for Future Work}\label{chapter:conc}
\subsubsection{Conjunctive Normal Form and Event Structures}
It has already been noted that our process specifications are essentially presented in ``disjunctive normal form''.
Can one devise any corresponding ``conjunctive normal form''?
This would be a model where the temporal relationships
(rather than the choices) now form the outer structure,
the disjunctive aspects being handled by some substructures where the
events used to be.
We consider a somewhat looser sense of CNF, taking a particular transformation one may perform on any prefix-ordered process $P$ to obtain a form of annotated schedule.
The first observation is that the constituent behaviors,
together with the morphisms relating them, form a diagram of type $A_P$.
In the case of a prefix ordered process,
prefix ordering guarantees that this diagram has a colimit.
Now as a matter of terminology, for each event $e$ of the colimit and each of the alternatives $a$ in $A_P$,
we say that $e$ is {\em present} in $a$ iff the image of the schedule $P(a)$ under the injection into the colimit schedule $E_P$ contains $e$.
The subset of all alternatives of $A_P$ in which $e$ is present we denote $\^f_P(e)$. Since the process is prefix ordered, an event present in $a$ will also be present in all ``later'' alternatives $a'$, i.~e., all alternatives $a'$ for which $a\le a'$; thus $\^f_P(e)$ is necessarily a {\em filter} (upward-closed subset) of $A_P$.
Essentially, we obtain a schedule $E_P$ together with an annotation $\^f_P:E_P\to\%F(A_P)$ of the schedule that indicates which subsets of events may collectively occur in any given scenario.
In the case of a partial order temporal structure, this looks very much like one of Winskel's {\em prime event structures} \cite{Win86}.
\begin{definition}
A prime event structure is a partially ordered set $(E,{\le})$ together with a
{\em consistency} predicate $\^z$ on finite subsets of $E$ satisfying
\begin{eqnarray}
&&\{e'|e'\le e\}\hbox{ is finite}\label{es:finite}\\
&&\{e\}\in\^z\\
&&Y\subseteq X\in\^z\implies Y\in\^z\\
&&X\in\^z\land e\in X\land e'\le e\implies X\cup\{e'\}\in\^z
\end{eqnarray}
for all $e,e'\in E$ and all finite subsets $X,Y$ of $E$.
\end{definition}
If one is willing to ignore the property (\ref{es:finite}),
one may crudely derive an ``event structure'' from a prefix-closed process $P$ in $\PROCK{\_\Pos_{<}}$ by constructing the event set $E_P$ via a colimit as described above and then defining $\^z_P$ via
\begin{equation}
X\in\^z_P\iff\bigcap_{e\in X}\^f_P(e)\ne\emptyset
\end{equation}
that is, a finite subset $X\subseteq E$ of events is {\em consistent} iff there exists some alternative $a$ in which they are all present.
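As an informal illustration, the filters $\^f_P(e)$ and the derived consistency test can be computed directly once each alternative is tagged with its set of colimit events. The Python sketch below assumes a toy process given by a `present` map from alternatives to event sets (all names here are hypothetical):

```python
def filters_and_consistency(alternatives, present):
    """Compute phi(e) = the set of alternatives in which event e is
    present, and the induced consistency test: X is consistent iff
    some alternative contains all of X, i.e. the intersection of the
    filters phi(e), e in X, is nonempty."""
    events = set().union(*present.values())
    phi = {e: {a for a in alternatives if e in present[a]} for e in events}
    def consistent(X):
        common = set(alternatives)
        for e in X:
            common &= phi[e]
        return bool(common)
    return phi, consistent
```

In a two-branch process where $x$ occurs only on one branch and $y$ only on the other, each singleton is consistent but $\{x,y\}$ is not, since no single alternative contains both.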
This reduction, however, loses information since, without any further conditions on the structure of $A_P$, there is, in general, no way to rederive the individual schedules $P(a)$ comprising the colimit schedule $E_P$.
Also from Winskel:
\begin{definition}
Given a prime event structure $(E,{\le},\^z)$, a subset
of $E$ is {\em consistent} iff all of its finite subsets are in $\^z$.
A subset $D$ is {\em left-closed} iff $e'\le e\land e\in D\implies e'\in D$ for all $e,e'$ in $E$.
\end{definition}
The set of all left-closed, consistent subsets of $E$ we denote $\_L(E)$, which can in turn be ordered by inclusion.
Clearly, $\_L(E)$ will include all of the original schedules\footnote{Actually, their images under the colimit injection into $E$.} comprising $P$, but it will also include all {\em prefixes} of those schedules. There may also be some orderings in $\_L(E)$ that were not present in the original $A_P$.
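For finite examples, $\_L(E)$ is easy to enumerate. The following Python sketch assumes $E$ is finite (so checking all subsets suffices) and that the consistency predicate is inherited by subsets, as the event structure axioms require; it makes both phenomena just mentioned visible:

```python
from itertools import combinations

def left_closed_consistent(E, le, consistent):
    """Enumerate L(E): subsets of the finite event set E that are
    left-closed under le and pass the consistency test.  Since
    consistency is inherited by subsets, checking each candidate
    set itself suffices."""
    result = []
    for r in range(len(E) + 1):
        for X in combinations(sorted(E), r):
            S = set(X)
            closed = all(e2 in S for e in S for e2 in E if (e2, e) in le)
            if closed and consistent(S):
                result.append(frozenset(S))
    return result
```

With $x\le y$ and everything consistent, the enumeration yields $\emptyset$, $\{x\}$, and $\{x,y\}$: the prefix $\{x\}$ appears even if no original schedule stopped there, while $\{y\}$ is excluded for not being left-closed.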
One way out is to consider more restrictions on the structure of the
process $P$, i.~e., to somehow stipulate that for every alternative
$a$ of $A_P$, and every possible distinct prefix $\:p'$ of the
schedule $P(a)$, there exists a unique alternative $a'\le a$ such that
$P(a')=\:p'$. This is effectively what Rabinovich and Trakhtenbrot
do in their work on {\em configuration structures} \cite{RT}.
To formalize this in our framework, we first take note of a correspondence
Winskel derives between prime event structures and {\em finitary prime algebraic domains}. First we need a rapid review of some domain theory terminology:
\begin{definition}
Given a partial order $(D,\sqsubseteq)$,
\begin{itemize}
\item
A subset $D'\subseteq D$ is {\em finitely compatible} iff each of its finite subsets $X$ has a least upper bound $\bigsqcup X$ in $D$.
\item
$D$ is {\em consistently complete} iff all of its finitely compatible subsets have least upper bounds.
\item
An element $f\in D$ is {\em finite} iff for all directed subsets $S$ of $D$,
$f\sqsubseteq\bigsqcup S\implies f\sqsubseteq s$ for some $s\in S$.
\item
$D$ is {\em algebraic} iff for every element $d\in D$, we have
$d=\bigsqcup\{f\sqsubseteq d|f\hbox{ is finite}\}$.
\item
For any element $d\in D$, the {\em left closure}, $\lfloor d\rfloor$, is the set $\{e|e\sqsubseteq d\}$.
\item
$D$ is {\em finitary} iff $\lfloor f\rfloor$ is finite for all finite $f$.
\item
An element $p$ is a {\em complete prime} of $D$ iff for any subset $D'$ with a least upper bound, $p\sqsubseteq\bigsqcup D'\implies \exists d\in D':p\sqsubseteq d$.
\item
$D$ is {\em prime algebraic} iff for every element $d\in D$, we have\hfil\break
$d=\bigsqcup\{p\sqsubseteq d|p\hbox{ is a complete prime}\}$.
\end{itemize}
\end{definition}
Now, for any finitary prime algebraic domain $(D,\sqsubseteq)$,
we can take the set of its primes $\_\Pr(D)$ ordered by $\sqsubseteq$,
and define a consistency predicate $\^z_{\_\Pr(D)}$ that contains all sets of primes having a least upper bound, at which point Winskel then proves (see \cite{Win86})
\begin{theorem}
\begin{itemize}
\item
For any finitary prime algebraic domain $(D,\sqsubseteq)$, we have that $(\_\Pr(D),\sqsubseteq,\^z_{\_\Pr(D)})$ is a prime event structure, and $(\_L\/\_\Pr(D),\subseteq)$ and $(D,\sqsubseteq)$ are isomorphic as partial orders.
\item
For any prime event structure $(E,{\le},\^z)$, $(\_L(E),\subseteq)$ is a finitary prime algebraic domain whose complete primes are the subsets $\lfloor e \rfloor$ for all $e\in E$ (and so $\_\Pr\/\_L(E)$ and $E$ are likewise isomorphic).
\end{itemize}
\end{theorem}
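The correspondence can be checked mechanically on small examples. The Python sketch below computes the complete primes of a finite family of sets ordered by inclusion, under the simplifying assumption (true of $\_L(E)$) that a join, when it exists, is the set-theoretic union; the empty subset has join $\bot$, which correctly rules out $\bot$ itself as a prime:

```python
from itertools import combinations

def complete_primes(domain):
    """Complete primes of a finite family of sets ordered by inclusion:
    p is a complete prime iff whenever p lies below a join (here, a
    union) that exists in the domain, p lies below some member of the
    joined subset.  The k = 0 case (join = bottom) excludes bottom."""
    primes = []
    for p in domain:
        is_prime = True
        for k in range(len(domain) + 1):
            for D in combinations(domain, k):
                join = frozenset().union(*D)
                if join in domain and p <= join and not any(p <= d for d in D):
                    is_prime = False
        if is_prime:
            primes.append(p)
    return primes
```

For $E=\{x,y\}$ with $x\le y$ and everything consistent, $\_L(E)=\{\emptyset,\{x\},\{x,y\}\}$, and the computed primes are $\{x\}=\lfloor x\rfloor$ and $\{x,y\}=\lfloor y\rfloor$, matching the second clause of the theorem.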
The essential point is that we can stipulate a class of processes $\%P_{pc}[\_2\hCat]$, which we call {\em prefix-closed}, in which we require
the alternative set $A_P$ to be a finitary prime algebraic domain, and for all alternatives $a\in A_P$, either
\begin{itemize}
\item
$a$ is a complete prime and $P(a)$ contains exactly one event $e$ that is not in the image of any other $P(a')$ and is maximal in $P(a)$, or
\item
$a$ is not a complete prime and all events in $P(a)$ are present in previous alternatives.
\end{itemize}
This class of processes, a subclass of the branching-time processes, is so severely restricted that building the colimit of the schedules, tossing away the
alternative set, and remembering only the consistency predicate loses no information; one can reconstruct the original process.
Unfortunately, the restrictions are so severe that we cannot take advantage of any flexibility in choice of temporal domain \%D; all of the schedules perforce look like partial order schedules.
Clearly, the more annotation we keep on our events when converting a process to an annotated schedule, the fewer requirements need to be placed on the structure of the process in order for this conversion to be bijective.
The question of what annotation suffices, e.g., for branching-time or closed branching-time processes is an open one. Presumably this would yield
some form of ``live'' event structure that would be able to express the notion
of more than one event being forced to occur at a given stage, thus facilitating the discussion of liveness properties.
One area for possible improvement includes finding less restrictive notions
of prefix. The current prefix notion for {\_R\hPhys} stipulates that if
there is any upper bound at all from event $x$ to event $y$ and $x$ is
part of a prefix, so is $y$. This actually makes sense from the point
of view that if alternatives are merely ordered, one can say nothing
about the timing of decisions. If $y$ is not to be in the prefix but
part of the next stage, then $y$ must be allowed to occur an
arbitrarily long time after $x$ since the decision to advance to the
next stage may be indefinitely postponed. This suggests instead
exploring some form of further enrichment for the alternative set, one
which would not only provide orderings between alternatives, but also
give time limits on the decisions between them.
\appendix
\bibliographystyle{plain}
\begin{thebibliography}{10}
\bibitem{BCSW}
R.~Betti, A.~Carboni, R.~Street, and R.~Walters.
\newblock Variation through enrichment.
\newblock {\em J. Pure and Applied Algebra}, 29:109--127, 1983.
\bibitem{BA77}
J.D. Brock and W.B. Ackerman.
\newblock An anomaly in the specifications of nondeterministic packet systems.
\newblock Technical Report Computation Structures Group Note CSG-33, MIT Lab.
for Computer Science, November 1977.
\bibitem{Cas91}
Ross Casley.
\newblock {\em On the Specification of Concurrent Systems}.
\newblock PhD thesis, Stanford University, January 1991.
\bibitem{CCMP}
R.T. Casley, R.F. Crew, J.~Meseguer, and V.R. Pratt.
\newblock Temporal structures.
\newblock In {\em Proc. Conf. on Category Theory and Computer Science, LNCS
389}, Manchester, September 1989. Springer-Verlag.
\newblock Revised version to appear in Math. Structures in Comp. Sci., {\bf
1}:2, 1991.
\bibitem{Flo62}
R.W. Floyd.
\newblock Algorithm 97: shortest path.
\newblock {\em Communications of the ACM}, 5(6):345, 1962.
\bibitem{GP}
H.~Gaifman and V.R. Pratt.
\newblock Partial order models of concurrency and the computation of functions.
\newblock In {\em Proc. 2nd Annual IEEE Symp. on Logic in Computer Science},